Article

Improved African Vulture Optimization Algorithm for Optimizing Nonlinear Regression in Wind-Tunnel-Test Temperature Prediction

1 College of Mechanical and Electrical Engineering, Shenyang Aerospace University, Shenyang 110136, China
2 Aerodynamics Research Institute, The Aviation Industry Corporation of China, Shenyang 110167, China
* Author to whom correspondence should be addressed.
Processes 2025, 13(7), 1956; https://doi.org/10.3390/pr13071956
Submission received: 23 May 2025 / Revised: 15 June 2025 / Accepted: 18 June 2025 / Published: 20 June 2025
(This article belongs to the Section Process Control and Monitoring)

Abstract

The thermal data of the hypersonic wind-tunnel field accurately reflect the aerodynamic performance and key parameters of the aircraft model. However, temperature prediction in hypersonic wind tunnels suffers from large delays, nonlinearity and multivariable coupling. To reduce the influence of temperature changes and improve the accuracy of temperature prediction in hypersonic wind-tunnel field control, this paper first combines kernel principal component analysis (KPCA) with phase-space reconstruction to preprocess the wind-tunnel-test temperature dataset, and the processed dataset is used as the input of the temperature-prediction model. Secondly, support vector regression is applied to construct the temperature-prediction model for the hypersonic wind-tunnel temperature field. Meanwhile, to address the difficulty of selecting the parameter combination of the support vector regression machine, an Improved African Vulture Optimization Algorithm (IAVOA) based on adaptive chaotic mapping and local search enhancement is proposed to jointly optimize the parameters of support vector regression. The proposed IAVOA was compared with the traditional African Vulture Optimization Algorithm (AVOA), Particle Swarm Optimization (PSO) and the Grey Wolf Optimizer (GWO) on 10 basic test functions, and its superiority in optimizing the parameters of the support vector regression machine was verified on actual temperature data from wind-tunnel field control.

1. Introduction

When conducting wind-tunnel tests, owing to the complexity of the airflow in the wind-tunnel flow field, the interaction between the airflow and the aircraft model and the influence of various environmental factors, wind-tunnel test data are typically nonlinear time series. The study of time series began in 1927, when Yule invented the autoregression method [1]. Researchers have since proposed methods for univariate and multivariate time series [2]. For nonlinear time series, researchers proposed the threshold autoregression method; the autoregressive conditional heteroscedasticity method and its improved variants are discussed in Tsay (1989) [3]. A representative example is the work of Jones et al. (2011), who, for a hypersonic intermittent blowdown wind tunnel, abstracted the entire wind-tunnel system as three pressure-vessel units and constructed a nonlinear mathematical model of the dynamic temperature change based on the principle of mass continuity in pressure vessels, in order to achieve precise and stable regulation of the air density, airflow velocity and air-volume flow rate in the wind tunnel [4]. The above-mentioned nonlinear prediction methods are parametric: essentially, they share the same parametric modeling framework as linear methods. Their core defect is that the prior assumptions on the model structure lack dynamic adaptability, so the established models exhibit limited generalization capability. In practical temperature-prediction scenarios, the characteristics of wind-tunnel systems are often shaped jointly by factors such as multi-physical-field coupling, multi-scale effects and non-stationary random disturbances.
The traditional parametric model is limited by the subjective bias of expert experience, and its preset model structure makes it difficult to match the dynamic evolution of system characteristics in real time. When the system undergoes sudden parameter changes, structural variations or environmental disturbances, the mismatch between the assumed model structure and the actual system characteristics increases significantly, and the model prediction error grows nonlinearly. Therefore, moving beyond parametric modeling and developing intelligent modeling methods with self-learning and self-evolution capabilities has become the key to improving the predictive performance of wind-tunnel systems.
At present, data-driven time-series prediction models have developed rapidly. Compared with traditional linear prediction models, such as univariate linear regression and the multivariate autoregressive moving average, as well as models that can only handle specific nonlinear forms, data-driven methods break the inherent constraints on model structure and data characteristics and have stronger nonlinear approximation capabilities. In time-series prediction, advanced neural-network models such as the Convolutional Neural Network (CNN), Fuzzy Neural Network (FNN), Wavelet Neural Network (WNN), Long Short-Term Memory (LSTM) and Spiking Neural Network (SNN) have achieved high-precision fitting and prediction of time series [5]. However, neural networks require sufficient data samples and their convergence is slow. Given the small sample size and short-term nature of wind-tunnel test data, support vector regression is more suitable than neural-network prediction: it has a strict theoretical and mathematical basis and is highly robust. In wind-tunnel temperature prediction, the accuracy of support vector regression is improved through intelligent optimization algorithms [6].
Meanwhile, with the development of swarm-intelligence algorithms, researchers draw inspiration from the living habits of social animals in nature, thereby creating intelligent optimization algorithms for solving complex engineering problems [7]. The essence of intelligent optimization algorithms is to simulate the evolution laws of natural or social systems through randomized search strategies and to estimate the optimal or approximate optimal solutions of the objective function in the complex solution space [8]. Algorithms can be classified according to their different design principles. The Genetic Algorithm (GA) [9], Differential Evolution (DE) [10] and Evolutionary Strategy (ES) belong to evolutionary algorithms that drive the iterative optimization of populations through genetic operators such as selection, crossover and mutation [11]. The Ant Colony Optimization Algorithm (ACO) [12], Particle Swarm Optimization Algorithm (PSO) [13] and Artificial Bee Colony Algorithm (ABC) all belong to swarm-intelligence algorithms [14], and they solve complex tasks through information sharing and collaborative actions among individuals. Facing the nonlinear, multimodal and complex constraint conditions existing in practical engineering applications, higher requirements have been put forward for optimization algorithms.
This paper addresses issues of the African Vulture Optimization Algorithm, such as slow convergence and insufficient accuracy on nonlinear, complex problems. To enhance the algorithm's performance, an improved African Vulture Optimization Algorithm based on chaotic mapping and a local-search-enhancement method is proposed. Meanwhile, to solve the difficulty of selecting the parameter combination of support vector regression machines, the improved vulture algorithm is applied to the parameter optimization of support vector regression for predicting the temperature in the wind-tunnel field. The improved regression model achieves better prediction accuracy and the lowest error indices among the compared methods, providing a solution for practical applications.

2. Basic Theory

2.1. Kernel Principal Component Analysis

Kernel principal component analysis (KPCA) achieves nonlinear dimensionality reduction of data sets and is usually used to handle linearly inseparable data sets [15]. For the wind-tunnel test dataset, a nonlinear mapping was first used to map all samples in the dataset into a high-dimensional (or even infinite-dimensional) space, called the feature space, in which they become linearly separable. Principal Component Analysis (PCA) was then performed for dimensionality reduction in this high-dimensional space [16]. Specifically, the $l \times n$ temperature data matrix is taken as the input space, a nonlinear mapping $\phi$ maps the data samples $X = [x_1, x_2, \ldots, x_n]$ from the input space to the feature space $H$ of dimension $l \times d$, and the mapped dataset $\phi(x) = [\varphi_1, \varphi_2, \ldots, \varphi_d]$ is obtained. At this point, the temperature dataset in the feature space is assumed to be centered by default, so that
$$\phi(x_i)\,\phi(x_j)^{T} = 0$$
Among these terms, $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, n$ and $i \ne j$, and the superscript $T$ represents the transpose of the matrix.
PCA was performed on the dataset $\phi(x)$ and the covariance matrix $C$ was calculated:
$$C = \frac{1}{n}\sum_{i=1}^{n}\phi(x_i)\,\phi(x_i)^{T}$$
The matrix $C$ was diagonalized to obtain the eigenvalues $\lambda$ and the corresponding eigenvectors $V$, which satisfy the relationship
$$C V = \lambda V$$
Since $V \in \operatorname{span}\{\phi(x_1), \phi(x_2), \ldots, \phi(x_n)\}$, there exists a set of coefficients $\alpha_1, \alpha_2, \ldots, \alpha_n$ such that
$$V = \sum_{j=1}^{n} \alpha_j\, \phi(x_j)$$
Substituting $V$ into the characteristic Equation (3) and left-multiplying both sides by $\phi(x_k)^{T}$, $k = 1, 2, \ldots, n$, gives
$$\frac{1}{n}\sum_{i=1}^{n}\alpha_i\,\phi(x_k)^{T}\phi(x_i)\,\phi(x_i)^{T}\phi(x_j) = \lambda \sum_{i=1}^{n}\alpha_i\,\phi(x_k)^{T}\phi(x_i)$$
Introducing the kernel function $K$ yields the non-zero eigensolutions of this problem. An $n \times n$ kernel matrix $K_{ij} = \phi(x_i)^{T}\phi(x_j) = K(x_i, x_j)$ was defined, and Formula (5) could then be simplified to
$$K\alpha = n\lambda\alpha$$
Meanwhile, to satisfy the default centering assumption, the kernel matrix was adjusted to
$$\bar{K} = K - I_n K - K I_n + I_n K I_n$$
where $(I_n)_{ij} = \frac{1}{n}$.
According to Formula (7), the centered kernel matrix $\bar{K}$ was obtained; the eigenvalues $\lambda$ and eigenvectors $V$ of this matrix were calculated, and the eigenvectors were normalized:
$$\alpha_k = \frac{1}{\sqrt{\lambda_k}}\, V_k$$
The cumulative contribution rates $B_1, B_2, \ldots, B_n$ of the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ were then calculated. According to the preset cumulative contribution rate $P$ of the principal components, when $B_k \ge P$, the first $k$ principal components $\alpha_1, \alpha_2, \ldots, \alpha_k$ were selected. By calculating the projections $Y$ of the original sample data onto these selected principal-component directions, the dimensionally reduced wind-tunnel temperature test dataset was obtained.
Among the candidate kernel functions, polynomial kernels and Gaussian kernels are mainly chosen [17]. This paper selects the Gaussian kernel function, whose expression is as follows:
$$K(x_i, x_j) = \exp\left(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\right)$$
Among them, $x_i$ and $x_j$ are two sample vectors in the input space, and $\sigma$ is the bandwidth parameter of the kernel function.
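As a concrete illustration of the KPCA steps above (kernel matrix, centering, eigendecomposition, contribution-rate selection and projection), the following is a minimal numpy sketch; it is not the authors' MATLAB implementation, and the function name `kpca_gaussian` and the default values of `sigma` and `P` are illustrative assumptions.

```python
import numpy as np

def kpca_gaussian(X, sigma=1.0, P=0.95):
    """Sketch of KPCA with a Gaussian kernel.

    X: (n, l) sample matrix; sigma: kernel bandwidth;
    P: target cumulative contribution rate.
    Returns the projections Y onto the retained principal components.
    """
    n = X.shape[0]
    # Gaussian kernel matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-d2 / (2.0 * sigma**2))
    # Centering in feature space: K_bar = K - 1n K - K 1n + 1n K 1n
    one_n = np.full((n, n), 1.0 / n)
    K_bar = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Eigendecomposition of the symmetric centered kernel matrix
    eigvals, eigvecs = np.linalg.eigh(K_bar)
    order = np.argsort(eigvals)[::-1]                       # descending order
    eigvals = np.clip(eigvals[order], 0.0, None)
    eigvecs = eigvecs[:, order]
    # Keep the first k components whose cumulative contribution reaches P
    contrib = np.cumsum(eigvals) / np.sum(eigvals)
    k = int(np.searchsorted(contrib, P) + 1)
    # Normalize alpha_k = V_k / sqrt(lambda_k) and project: Y = K_bar @ alpha
    alphas = eigvecs[:, :k] / np.sqrt(np.maximum(eigvals[:k], 1e-12))
    return K_bar @ alphas
```

The number of retained columns in `Y` is whatever the cumulative contribution rate dictates, not a fixed target dimension, matching the selection rule described above.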
After KPCA dimensionality reduction of the wind-tunnel test data, phase-space reconstruction was carried out. Phase-space reconstruction (PSR) is the core step of chaotic time-series analysis and the key to revealing the intrinsic dynamic characteristics of the system. Through precise phase-space reconstruction, a nonlinear prediction model of the system can be established, providing important theoretical support for the control and prediction of chaotic systems. The reconstructed wind-tunnel test data were used as the input of the subsequent model. The test statistic $\bar{S}(\tau)$, its maximum deviation $\Delta\bar{S}(\tau)$ and the indicator $S_{cor}(\tau)$ were plotted. The first local minimum of $\Delta\bar{S}(\tau)$ was taken as the optimal time delay $\tau_d$, and the global minimum of $S_{cor}(\tau)$ gave the time-window length $\tau_w$ of the time series. Through the delay time $\tau_d$ and the embedding dimension $m$, a phase space topologically equivalent to the original system was reconstructed, improving the adaptability of the prediction model to the wind-tunnel test data.
$$\tau_w = (m - 1)\,\tau_d$$
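Once $\tau_d$ and $m$ have been chosen, the delay embedding itself is mechanical; the sketch below (illustrative function name, numpy assumed) builds the reconstructed state vectors with window length $\tau_w = (m-1)\tau_d$.

```python
import numpy as np

def delay_embed(x, m, tau_d):
    """Delay embedding of a scalar series x with embedding dimension m
    and delay tau_d; the time window is tau_w = (m - 1) * tau_d."""
    x = np.asarray(x)
    tau_w = (m - 1) * tau_d
    n_vec = len(x) - tau_w              # number of reconstructed state vectors
    if n_vec <= 0:
        raise ValueError("series too short for this (m, tau_d)")
    # Row i is [x_i, x_{i+tau_d}, ..., x_{i+(m-1)*tau_d}]
    return np.array([x[i : i + tau_w + 1 : tau_d] for i in range(n_vec)])
```

Each row of the returned matrix is one reconstructed state vector, so the matrix can be fed directly to the regression model as its input set.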

2.2. Support Vector Regression

Support Vector Regression (SVR), as an extension of the SVM to regression, breaks through the limitation that traditional linear regression can only handle linear relationships; by introducing kernel techniques and nonlinear mapping mechanisms to achieve precise modeling of complex nonlinear systems, SVR is one of the typical applications of the SVM [18].
The essence of the SVM is, for a given sample set $\{(x_i, y_i)\}$, $i = 1, 2, \ldots, H$, to find a generalized optimal separating hyperplane $S(x, y) = 0$, where $x_i$ is the $i$-th input vector, $y_i$ is the corresponding scalar output and $H$ is the number of samples [19].
Considering that the temperature data are nonlinear, it was necessary to transform the nonlinearity of the input sample space to a high-dimensional linear feature space and solve the nonlinear problem in this high-dimensional space. Then, in the SVR algorithm, the function model can be expressed as [20]
$$f(x) = w^{T}\varphi(x) + b$$
Among these terms, $f(x)$ is the decision function, $w$ is the coefficient vector of the model, $T$ denotes the matrix transpose, $b$ is the bias of the model and $\varphi$ represents the mapping function from low-dimensional to high-dimensional features.
When the absolute deviation between the value predicted by the model and the true value is less than $e$, the value of the loss function $\varepsilon$ is always $0$. The loss function can then be expressed as
$$\varepsilon(x_i, y_i) = \begin{cases} 0, & \left|y_i - f(x_i)\right| \le e \\ \left|y_i - f(x_i)\right| - e, & \left|y_i - f(x_i)\right| > e \end{cases}$$
By finding the optimal model coefficients $w$ and bias $b$, the resulting optimization problem can be expressed as
$$\min\; \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{M}\varepsilon(x_i, y_i)$$
SVR can handle nonlinear regression problems by introducing kernel functions. The role of the kernel function is to map the input data from the low-dimensional space to the high-dimensional feature space, thereby finding the optimal regression hyperplane in the high-dimensional space.
The Radial Basis Function (RBF) kernel is widely used in kernel methods due to its several advantages. First, it is capable of handling non-linear relationships by mapping the input data into an infinite-dimensional space, allowing the model to capture complex patterns. Additionally, the RBF kernel is highly flexible, as it has a single parameter, the bandwidth, which can be adjusted to control the smoothness of the decision boundary. This flexibility enables it to perform well across a wide range of datasets, especially when the underlying data distribution is not linear. Consequently, in this paper, the kernel function of SVR was selected as the radial basis kernel function:
$$K(x_i, x_j) = \exp\left(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\right) = \exp\left(-g\,\|x_i - x_j\|^2\right)$$
The bandwidth of the Gaussian Radial Basis Function (RBF) kernel is determined by $g$, which affects the distribution of sample points in the high-dimensional feature space. The RBF kernel has a single hyperparameter and strong approximation capability in support vector regression. The accuracy of support vector regression depends on the penalty factor $C$ and the choice of the kernel parameter $g$. Therefore, when using the RBF kernel function, an intelligent optimization algorithm was adopted to optimize and compare these parameters.
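To illustrate how the penalty factor $C$, the insensitive-zone width $e$ and the kernel parameter $g$ enter the model, the sketch below fits an RBF-kernel SVR to synthetic data with scikit-learn; the library choice and all numeric values are assumptions (the paper's experiments were run in MATLAB), and the 6:4 split mirrors the ratio used later in the paper.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic 1-D nonlinear series standing in for wind-tunnel temperature data.
rng = np.random.default_rng(42)
X = np.linspace(0, 4 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * rng.normal(size=200)

# C is the penalty factor, gamma corresponds to g in the RBF kernel,
# epsilon is the width e of the insensitive zone.
model = SVR(kernel="rbf", C=10.0, gamma=0.5, epsilon=0.05)
split = 120                         # 6:4 train/test split
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmse = float(np.sqrt(np.mean((pred - y[split:]) ** 2)))
```

Tuning `C`, `epsilon` and `gamma` jointly, rather than one at a time, is exactly the combinatorial search that the optimization algorithm of Section 3 automates.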

2.3. African Vulture Optimization Algorithm

The African Vultures Optimization Algorithm (AVOA) is a meta-heuristic swarm-intelligence optimization algorithm proposed by Abdollahzadeh et al. in 2021 [21]. As a newly proposed swarm-intelligence algorithm, AVOA possesses unique search strategies and dynamic mechanisms; compared with more commonly used swarm-intelligence algorithms, it demonstrates superior performance on complex optimization problems, especially in avoiding local optima and enhancing search efficiency. In the mathematical modeling of the traditional AVOA, the virtual spatial position of each individual in the vulture population is modeled in vector form. The vulture population can be expressed as
$$X = [X_1, X_2, \ldots, X_i, \ldots, X_n]^{T}$$
$$X_i = [X_{i,1}, X_{i,2}, \ldots, X_{i,j}, \ldots, X_{i,D}]$$
Among these terms, $n$ represents the size of the vulture population, $X_i$ represents the position of the $i$-th vulture in the population, $X_{i,j}$ represents the position of the $i$-th vulture in the $j$-th dimension, $D$ represents the highest dimension of the vulture search space and $T$ represents the transpose of the matrix. During the actual operation of the algorithm, the spatial position parameters were adjusted based on the dynamic fitness evaluation of individual vultures. The fitness of the vultures is expressed as
$$F_X = \left[f(X_1), f(X_2), \ldots, f(X_n)\right]^{T}$$
Among these terms, each row of the matrix $F_X$ corresponds to the fitness value of one vulture individual in the population.
The execution process of the AVOA algorithm can be systematically broken down into five core stages.
(1)
Population grouping and selection of the leading vulture
Based on the fitness values, the vulture population was divided into three groups in descending order of fitness by using a stratification strategy:
$$X_{best}^{t} = \begin{cases} X_{best1}^{t}, & \theta_i = \alpha \\ X_{best2}^{t}, & \theta_i = \beta \end{cases}$$
(2)
Calculate the hunger level of vultures
The hunger degree of vulture individuals can be expressed as
$$S_i^{t} = \left(2 r_1^{t} + 1\right) \times h^{t} \times \left(1 - \frac{t}{T}\right) + \tau^{t}$$
(3)
Exploration Stage
When the hunger level of an individual vulture satisfied $\left|S_i^{t}\right| \ge 1$, the algorithm entered the exploration stage.
$$X_i^{t+1} = \begin{cases} X_{best}^{t} - G_i^{t} \times S_i^{t}, & r_{p1}^{t} \le p_1 \\ X_{D}^{t} - S_i^{t} + r_3^{t} \times \left((ub - lb) \times r_4^{t} + lb\right), & r_{p1}^{t} > p_1 \end{cases}$$
(4)
Pre-mining stage
When the hunger level of an individual vulture satisfied $0.5 \le \left|S_i^{t}\right| < 1$, the vultures entered the mid-term (pre-)mining stage.
$$X_i^{t+1} = \begin{cases} G_i^{t} \times \left(S_i^{t} + r_5^{t}\right) - \left(X_{best}^{t} - X_i^{t}\right), & r_{p2}^{t} \le p_2 \\ X_{best}^{t} - \left(L_1 + L_2\right), & r_{p2}^{t} > p_2 \end{cases}$$
(5)
Late mining stage
When the hunger level of an individual vulture satisfied $\left|S_i^{t}\right| < 0.5$, the vultures entered the late mining stage, and their gathering behavior can be expressed as
$$X_i^{t+1} = \begin{cases} \dfrac{M_1 + M_2}{2}, & r_{p3}^{t} \le p_3 \\ X_{best}^{t} - \left|X_{best}^{t} - X_i^{t}\right| \times S_i^{t} \times levy(d), & r_{p3}^{t} > p_3 \end{cases}$$
The above content systematically expounds the theoretical basis of the African Vulture Optimization Algorithm (AVOA), including its bio-inspired mechanism, search strategy and dynamic strategy-switching mechanism.
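The hunger-driven stage switching described above can be sketched as follows; this is a simplified reading of the formulas, with `h` (a per-vulture random factor) and `omega` as illustrative assumptions rather than the reference implementation.

```python
import numpy as np

def avoa_stage(t, T, h, rng, omega=2.5):
    """Compute a vulture's hunger level S and the stage it selects.

    Follows S = (2*r1 + 1) * h * (1 - t/T) + tau, with the stage chosen as
    exploration (|S| >= 1), pre-mining (0.5 <= |S| < 1) or late mining
    (|S| < 0.5). h is a per-vulture random factor; tau is the adjustment term.
    """
    r1 = rng.random()
    tau = h * (np.sin(np.pi / 2 * t / T) ** omega
               + np.cos(np.pi / 2 * t / T) - 1)
    S = (2 * r1 + 1) * h * (1 - t / T) + tau
    if abs(S) >= 1:
        return S, "exploration"
    elif abs(S) >= 0.5:
        return S, "pre-mining"
    return S, "late mining"
```

Early in a run the $(1 - t/T)$ factor keeps $|S|$ large, favoring exploration; as $t$ approaches $T$ the hunger level shrinks and the population shifts toward the mining stages.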

3. Optimized Model for Wind-Tunnel Thermal Prediction

Since its proposal, the AVOA algorithm has attracted the attention of many scholars and has produced a large number of research results in fields such as new energy, medicine and health care and mechanical engineering [22]. This paper combines AVOA with Support Vector Regression (SVR) and proposes an Improved African Vulture Optimization Algorithm (IAVOA) based on adaptive chaotic mapping and local search enhancement to jointly optimize the parameters of support vector regression; the resulting model achieves the best prediction accuracy on actual wind-tunnel test temperature data. All experiments were conducted in the MATLAB R2021b environment.

3.1. Improved African Vulture Optimization Algorithm (IAVOA)

(1)
Adaptive chaotic mapping
To address the problems of the traditional AVOA, such as its tendency to fall into local optima and its limited search accuracy, this study proposes an Improved African Vulture Optimization Algorithm (IAVOA). The algorithm optimizes the iterative update process of the population by introducing an Adaptive Chaotic Mapping (ACM) mechanism to enhance population diversity and global-search ability. Meanwhile, an Enhanced Local Search (ELS) strategy conducts a fine search around excellent individuals in the later iterations, improving the convergence speed and solution accuracy of the algorithm.
ACM enhances the global-search ability of the algorithm by introducing nonlinear dynamic behavior and avoids falling into local optima. The random sequence generated by the Logistic chaotic map is non-repetitive and ergodic, which effectively solves the problem of an uneven initial population distribution: it avoids the local-optimum traps caused by excessive aggregation of individuals while preventing the loss of search efficiency caused by an overly dispersed distribution. This property significantly enhances the global-exploration ability of the algorithm and provides a more efficient search-space traversal mechanism for the optimization process. The generation of chaotic sequences by the Logistic map can be expressed as
$$\gamma_{t+1} = r\,\gamma_t \left(1 - \gamma_t\right)$$
Among these terms, $r$ is the model control parameter, taken as $r = 4$; $t$ is the current iteration number, $t = 1, 2, \ldots, T\_Max$, where $T\_Max$ is the maximum number of iterations; and $\gamma_t$ is the chaotic mapping variable that changes with the iteration. The parameters of the chaotic mapping are dynamically adjusted according to the fitness value of the current iteration to enhance the adaptive ability of the algorithm. In practice, the initial chaotic variable was set to $\gamma_0 = 0.7$.
Chaotic sequence values and the distribution histogram of chaotic values are important tools for studying the characteristics of chaotic systems. Figure 1 shows the chaotic sequence values and the distribution histogram of chaotic values of the Logistic map after 100 iterations.
Chaotic systems have a sensitive dependence on initial conditions. The chaotic sequences they generate seem random, but in fact follow certain inherent laws. The distribution histogram can visually display the values of chaotic sequences or the distribution of chaotic values, helping us understand the statistical characteristics of chaotic systems.
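A minimal sketch of the Logistic map as used here ($r = 4$, $\gamma_0 = 0.7$): with these settings the sequence stays inside $(0, 1)$ and spreads across the interval, which is the behavior the sequence plot and histogram in Figure 1 illustrate.

```python
import numpy as np

def logistic_sequence(gamma0=0.7, r=4.0, n=100):
    """Generate a Logistic chaotic sequence gamma_{t+1} = r*gamma_t*(1-gamma_t)."""
    seq = np.empty(n)
    g = gamma0
    for t in range(n):
        g = r * g * (1.0 - g)   # one chaotic iteration
        seq[t] = g
    return seq

seq = logistic_sequence()       # 100 iterations, as in Figure 1
```

A histogram of `seq` would reproduce the characteristic U-shaped density of the fully chaotic Logistic map, with values concentrated near the ends of the interval.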
When calculating the hunger degree of individual vultures, the adjustment parameter $\tau^{t}$ for preventing convergence to a local optimum was combined with the chaotic map and changed into an adaptive parameter $\bar{\tau}^{t}$, which can be expressed as
$$\bar{\tau}^{t} = \gamma^{t} \times U \times \left(\sin^{\omega}\!\left(\frac{\pi}{2} \times \frac{t}{T}\right) + \cos\left(\frac{\pi}{2} \times \frac{t}{T}\right) - 1\right)$$
$$\gamma_i^{t} = 4 \times \gamma_i^{t-1} \times \left(1 - \gamma_i^{t-1}\right)$$
Among these terms, $U$ is a random number uniformly distributed in the interval $[-2, 2]$.
When the vulture population had completed all the mining strategies, a chaotic disturbance was introduced into the current vulture positions, and the position-update mode was determined by the relationship between the random numbers. When $r_{p3}^{t} \le p_3$, the upper branch of Formula (22) becomes
$$X_i^{t+1} = \frac{M_1 + M_2}{2} + \gamma_i^{t} \times R_i^{t} \times \left(X_{best}^{t} - X_i^{t}\right)$$
When $r_{p3}^{t} > p_3$, the lower branch of Formula (22) becomes
$$X_i^{t+1} = X_{best}^{t} - \left|X_{best}^{t} - X_i^{t}\right| \times S_i^{t} \times levy(d) + \gamma_i^{t} \times R_i^{t} \times \left(X_{best}^{t} - X_i^{t}\right)$$
where $R_i^{t}$ is a random number, $R_i^{t} \in [0, 1]$.
At the same time, the upper and lower dynamic boundaries of the vulture positions, $ub^{t}$ and $lb^{t}$, were recorded during each iteration update. If a position exceeded the dynamic boundary during the update, it was clamped, which can be expressed as
$$X_i^{t+1} = \begin{cases} ub^{t}, & X_i^{t+1} > ub^{t} \\ lb^{t}, & X_i^{t+1} < lb^{t} \end{cases}$$
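The disturbance-plus-bounding step can be sketched as below (illustrative helper name, numpy assumed): the chaotic term $\gamma^t \times R \times (X_{best} - X_i)$ is added to an already-updated position, and the result is clamped to the dynamic bounds.

```python
import numpy as np

def perturb_and_clamp(x_new, x_i, x_best, gamma_t, ub, lb, rng):
    """Add the chaotic disturbance gamma_t * R * (X_best - X_i) to an updated
    position x_new, then clamp the result to the dynamic bounds [lb, ub]."""
    R = rng.random(x_i.shape)                     # R uniform in [0, 1)
    x_new = x_new + gamma_t * R * (x_best - x_i)
    return np.clip(x_new, lb, ub)                 # dynamic-boundary handling
```

The clamp keeps every vulture inside the recorded per-iteration bounds, so the chaotic disturbance can diversify the population without letting candidates escape the feasible region.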
(2)
Local search enhancement
In the later stage of the algorithm ($T\_Max/2 < t < T\_Max$), in order to avoid the vulture positions falling into local optima, a local search-enhancement strategy was introduced. Its core is to apply random perturbations to the currently obtained optimal solution: moderate exploration near the optimum breaks possible local-optimum constraints, improving the accuracy and quality of the solution. During the mining stage, the local search focuses on the area around the current optimal solution and mines it finely to further improve the solution. By integrating the local search mechanism into both key stages, the global-search and local-development abilities of the algorithm are enhanced, ensuring efficient and stable search performance at different stages. The local search can be expressed as
$$\bar{X}_{best}^{t} = X_{best}^{t} + 0.1 \times R_i^{t} \times \left(ub^{t} - lb^{t}\right)$$
where $R_i^{t}$ is a random number, $R_i^{t} \in [0, 1]$.
After introducing the random perturbation mechanism to enhance the local search efficiency, in order to further optimize the convergence and stability of the algorithm, this study implemented the retention strategy of the optimal individual based on the hunger index corresponding to the optimal individual of the vulture. Specifically, during the iterative process of the algorithm, when the local search ability is enhanced through random perturbation, the system will conduct a comprehensive evaluation of the performance of all vulture individuals. The above process can be expressed as
$$X_{best}^{t} = \begin{cases} X_{best}^{t}, & f\left(X_{best}^{t}\right) < f\left(\bar{X}_{best}^{t}\right) \\ \bar{X}_{best}^{t}, & f\left(X_{best}^{t}\right) \ge f\left(\bar{X}_{best}^{t}\right) \end{cases}$$
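Taken together, the local-search perturbation and the greedy retention rule amount to a hill-climbing step around the incumbent best; a minimal numpy sketch (illustrative names, minimization assumed):

```python
import numpy as np

def local_search_step(x_best, f, ub, lb, rng):
    """One enhanced-local-search step: perturb the current best solution by
    0.1 * R * (ub - lb) with R in [0, 1), clamp to the bounds, and keep the
    perturbed point only if its fitness improves (minimization assumed)."""
    R = rng.random(x_best.shape)
    x_trial = np.clip(x_best + 0.1 * R * (ub - lb), lb, ub)
    return x_trial if f(x_trial) < f(x_best) else x_best
```

Because the rule is greedy, the best fitness is non-increasing across iterations, which is what stabilizes the late phase of the search.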
Based on the improved scheme of the African Vulture Optimization Algorithm described in the previous section, in order to clearly illustrate the specific implementation process and logical structure of the algorithm, the corresponding pseudo-code of this improved algorithm is presented, as shown in Algorithm 1. The corresponding flowchart is also provided, as depicted in Figure 2.
Algorithm 1 Improved African Vulture Optimization Algorithm
Input: Population size of vultures and the maximum number of iterations T_Max
Output: The position and fitness value of the vulture
1: Set parameters and initialize the population
2: While (t < T_Max) do
3:  Calculate the fitness of vultures, select elite vultures, and store the data
4:  Set X_best1^t as the vulture position (the first best position)
5:  Set X_best2^t as the vulture position (the second best position)
6:  for (each vulture) do
7:    Select X_best^t
8:    Calculate the hunger rate S_i^t
9:    if (|S_i^t| ≥ 1) then
10:    if (r_p1^t ≤ p_1) then
11:       Update the position of the vultures
12:    else
13:       Update the position of the vultures
14:    end if
15:  else
16:    if (|S_i^t| < 1) then
17:       if (|S_i^t| ≥ 0.5) then
18:         if (r_p2^t ≤ p_2) then
19:           Update the position of the vultures
20:         else
21:           Update the position of the vultures
22:         end if
23:       else
24:         if (r_p3^t ≤ p_3) then
25:           Update the position of the vultures
26:         else
27:           Update the position of the vultures
28:         end if
29:       end if
30:    end if
31:    Update the position of the vulture according to the formula
32:    t = t + 1
33:    if (t > T_Max/2) then
34:       if (r_p3^t ≤ p_3) then
35:         Update the positions of the vultures
36:       else
37:         Update the positions of the vultures
38:       end if
39:    end if
40:  end for
41: end while
42: Return X_best^t

3.2. Benchmark Test Function

This section systematically evaluates the performance of the improved African Vulture Optimization Algorithm (IAVOA) through benchmark function simulation experiments. To verify the effectiveness of the improved algorithm, ten basic test functions with different characteristics were selected. Unimodal functions (Sphere function, Ellipsoid function, Zakharov function, Quadric function, Cigar function) and multimodal functions (Rastrigin function, Ackley function, Griewank function, Rosenbrock function, Schwefel function) were used to evaluate the performance of the IAVOA algorithm, and a comprehensive test was conducted. The benchmark functions are used to test the performance of the optimization functions and to evaluate and compare the advantages of different optimization algorithms. These functions have known optimal solutions and are defined in multi-dimensional spaces to test the global search, local search and function optimization capabilities of the algorithms [23]. The IAVOA algorithm was compared with the basic African Vulture Optimization Algorithm [21], Particle Swarm Optimization Algorithm [24] and Grey Wolf Optimization Algorithm [25]. After each algorithm ran independently 30 times, an analysis and comparison were conducted. During the experiment, the performance indicators, such as the convergence curve and the accuracy of the optimal solution of the algorithm on different test functions, were recorded in detail.
Ten benchmark test functions are presented, as shown in Table 1. In the table, functions $f_1$–$f_5$ are unimodal test functions, each with a single global optimum and no local optima; they are mainly used to test the convergence speed and exploitation capability of the algorithm and to measure its local-development efficiency. Functions $f_6$–$f_{10}$ are multimodal test functions with multiple local optima surrounding the global optimum; they are used to test the global-search ability of the algorithm, its ability to escape local optima and its global-exploration capability. These functions are standard test sets in the field of optimization and can expose potential flaws of an algorithm, such as premature convergence and trapping in local optima. The performance of the IAVOA algorithm can be evaluated through the above test functions.
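For reference, a unimodal and a multimodal member of such a test set can be written as below (numpy sketch; the exact bounds and dimensions used in the experiments are those listed in Table 1):

```python
import numpy as np

def sphere(x):
    """Unimodal Sphere function: f(x) = sum(x_i^2); global optimum 0 at x = 0."""
    return float(np.sum(np.asarray(x) ** 2))

def rastrigin(x):
    """Multimodal Rastrigin function:
    f(x) = sum(x_i^2 - 10*cos(2*pi*x_i) + 10); global optimum 0 at x = 0,
    surrounded by a regular grid of local optima."""
    x = np.asarray(x)
    return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))
```

The Sphere function rewards fast exploitation along a single basin, while the cosine term of the Rastrigin function creates the dense field of local optima that exposes premature convergence.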
Under the same environmental configuration, with the aid of the MATLAB R2021b numerical computing platform, the four optimization algorithms were modeled. In terms of parameter settings, the population size of the four algorithms was uniformly set to 30 and the maximum number of iterations to 500. The settings of the other parameters involved in the algorithms are shown in Table 2.
The iteration curves on the test functions are shown in Figure 3, and the test values of the basic functions are presented in Table 3. Comparing the mean, standard deviation, maximum and minimum of the results obtained by running each test function independently 30 times with each algorithm gives an intuitive picture of how the algorithms perform on the various test functions. The results show that, across the ten test functions, the IAVOA algorithm performed better than the AVOA, PSO and GWO algorithms.
The experimental results show that, compared with the AVOA, PSO and GWO algorithms, the IAVOA algorithm achieved better optimization results on all test functions, converging faster and reaching higher solution accuracy. This demonstrates the effectiveness of the adaptive chaotic mapping and the enhanced local-search strategy introduced in IAVOA.
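The adaptive chaotic mapping mentioned above builds on the logistic map shown in Figure 1. As a rough illustration only (not the authors' exact scheme), chaotic initialization of a vulture population might look like the following sketch:

```python
import numpy as np

def logistic_chaotic_init(pop_size, dim, lb, ub, mu=4.0):
    """Chaotic population initialization via the logistic map x_{k+1} = mu*x_k*(1 - x_k).

    With mu = 4 the map is fully chaotic on (0, 1); iterating it once per
    individual spreads the initial population over the search space.
    Illustrative sketch -- the paper's adaptive chaotic mapping refines this idea.
    """
    x = np.linspace(0.11, 0.89, dim)    # distinct chaotic seeds per dimension
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        x = mu * x * (1.0 - x)          # one chaotic iteration
        pop[i] = lb + x * (ub - lb)     # map (0, 1) onto the search range
    return pop

pop = logistic_chaotic_init(30, 20, -100.0, 100.0)  # 30 vultures, 20 dimensions
print(pop.shape)  # (30, 20)
```

The seeds deliberately avoid the fixed points of the map (0, 0.5, 0.75), so every dimension keeps iterating chaotically.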

3.3. Improved AVOA-Optimized SVR

Based on the IAVOA algorithm and the theoretical model of Support Vector Regression (SVR), a wind-tunnel temperature-prediction model with IAVOA-optimized SVR (IAVOA-SVR) was constructed. During construction of the SVR model, the data processed by kernel principal component analysis and phase-space reconstruction were taken as the model's input set, the ratio of the training set to the test set was 6:4, and the commonly used radial-basis kernel was selected as the kernel function. The IAVOA algorithm was used to jointly optimize the SVR penalty parameter, the insensitive-loss parameter and the radial-basis kernel parameter, in order to obtain an optimal parameter combination. Reasonable adjustment of these parameters effectively controls the complexity and generalization ability of the model, enabling it to perform well on different datasets and tasks. The steps, schematic diagram and flowchart of IAVOA-SVR are shown in Table 4, Figure 4 and Figure 5, respectively.
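The overall IAVOA-SVR construction can be sketched as follows. Since neither the IAVOA implementation nor the wind-tunnel data are reproduced here, a plain random search over (C, ε, γ) stands in for the IAVOA step and synthetic features stand in for the KPCA/PSR output; the fitness is the training-model RMSE, as in Table 4 (Python with scikit-learn, whereas the paper used Matlab):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic stand-in for the KPCA + phase-space-reconstructed features
# (the real wind-tunnel data are not public).
X = rng.uniform(-1.0, 1.0, size=(300, 4))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=300)

split = int(0.6 * len(X))               # 6:4 training/test split, as in the paper
X_tr, y_tr = X[:split], y[:split]
X_te, y_te = X[split:], y[split:]

def fitness(params):
    """Training-model RMSE: the fitness IAVOA minimizes over (C, epsilon, gamma)."""
    C, eps, gamma = params
    model = SVR(kernel="rbf", C=C, epsilon=eps, gamma=gamma).fit(X_tr, y_tr)
    return np.sqrt(mean_squared_error(y_tr, model.predict(X_tr)))

# Random search stands in here for the IAVOA parameter search itself.
candidates = [(rng.uniform(0.1, 100.0), rng.uniform(1e-3, 0.3), rng.uniform(1e-2, 2.0))
              for _ in range(30)]
best = min(candidates, key=fitness)

# Evaluate the selected parameter combination on the held-out 40%.
model = SVR(kernel="rbf", C=best[0], epsilon=best[1], gamma=best[2]).fit(X_tr, y_tr)
test_rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print("best (C, epsilon, gamma):", best)
```

Replacing the random-search line with an IAVOA loop over the same `fitness` function yields the structure of the paper's Table 4.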

4. Experimental Results and Analysis

4.1. Data Introduction

The wind-tunnel test data in this paper come from two sets of temperature and pressure data collected at the high-altitude wind-tunnel test site of the AVIC Aerodynamics Research Institute (Shenyang). The two sample sets are of different sizes, containing 1600 and 2000 cases, respectively. The data dimensions cover the key parameters of the wind-tunnel test process: the inlet temperature of the wind-tunnel test section, the average outlet temperature, the average temperatures of the eight independent module areas distributed within the test section, the medium temperatures of the four functional areas and the inlet pressure of the test section. Among these multi-dimensional characteristic parameters, the average outlet temperature was set as the target variable to be predicted.

4.2. Data Preprocessing

For the two groups of wind-tunnel test datasets, dimensionality reduction was implemented in the Matlab R2021b numerical computing environment based on the kernel-principal-component-analysis framework described earlier. The Gaussian kernel was adopted as the nonlinear mapping function, and its kernel-width parameter was determined by cross-validation to be σ = 0.85. Based on the cumulative-contribution-rate criterion, the principal-component screening threshold was set at 95%; that is, the top four principal components were selected so that their cumulative explained variance exceeded 95% of the total variance, ensuring that the reduced data effectively retain the original information while significantly lowering the computational complexity.
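A minimal sketch of this dimensionality-reduction step (Python/scikit-learn rather than the paper's Matlab; the 14-dimensional placeholder input is an assumption). Note that scikit-learn parameterizes the Gaussian kernel as k(x, y) = exp(−γ‖x − y‖²), so the paper's kernel width σ corresponds to γ = 1/(2σ²):

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 14))      # placeholder for the multi-dimensional tunnel data

sigma = 0.85                         # kernel width chosen by cross-validation in the paper
kpca = KernelPCA(n_components=4, kernel="rbf", gamma=1.0 / (2.0 * sigma ** 2))
Z = kpca.fit_transform(X)            # top four kernel principal components
print(Z.shape)                       # (200, 4)
```

In the paper the number of retained components (four) is itself fixed by the 95% cumulative-contribution threshold rather than chosen a priori.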
Each group of data was divided into a training set and a test set in a ratio of 6:4. To eliminate the influence of dimensions, the experimental data were normalized and mapped onto a common scale. Phase-space reconstruction was then carried out according to the steps described above: the data were decomposed into 200 disjoint time series, the embedding dimensions were taken as m = 2, 3, 4, 5, and the radius was set to r = jσ/2 (j = 1, 2, 3, 4, where σ is the average value of each column). The verification statistic S̄(τ), the maximum deviation ΔS̄(τ) and the index curve S_cor(τ) of the dimension-reduced wind-tunnel test data, computed on the Matlab R2021b platform, are shown in Figure 6.
In the training dataset, cross-validation was used to determine the parameter values. For data 1, according to the Matlab results, the first minimum of ΔS̄(τ) occurs at τ = 8, which is taken as the optimal time delay τ_d = 8, and the global minimum of S_cor(τ) gives the time-window length of the series, τ_w = 190. From τ_w = (m − 1)τ_d, m = 22 is obtained. Since this exceeds the set range, the embedding dimension is taken as m = 5.
For data 2, according to the Matlab results, the first minimum of ΔS̄(τ) occurs at τ = 7, which is taken as the optimal time delay τ_d = 7, and the global minimum of S_cor(τ) gives the time-window length τ_w = 170. From τ_w = (m − 1)τ_d, m = 25 is obtained. Since this exceeds the set range, the embedding dimension is taken as m = 5.
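With the delay τ_d and embedding dimension m fixed, the reconstruction itself is a standard delay embedding. A minimal sketch (Python, with a toy series in place of the wind-tunnel data):

```python
import numpy as np

def delay_embed(series, m, tau):
    """Phase-space reconstruction: rows are [x_t, x_{t+tau}, ..., x_{t+(m-1)tau}]."""
    n = len(series) - (m - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (m, tau)")
    return np.column_stack([series[i * tau : i * tau + n] for i in range(m)])

x = np.arange(100, dtype=float)        # toy series
E = delay_embed(x, m=5, tau=8)         # data-1 settings: m = 5, tau_d = 8
print(E.shape)                          # (68, 5)
```

Each row of the embedding matrix is one reconstructed phase-space point and becomes one input sample for the SVR model.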

4.3. Error-Evaluation Index

In the comprehensive and precise assessment of complex machine-learning models, the scientific and reasonable selection of error-evaluation indicators is crucial. These indicators are not only quantitative tools for measuring a model's predictive ability, but also important bases for guiding model optimization, parameter tuning and practical application decisions.
(1) The Root Mean Square Error (RMSE) is a common indicator for measuring the difference between the predicted values of the model and the true values. The root mean square error can be expressed as
$RM = \left[ \frac{1}{M-1} \sum_{i=1}^{M} \left( p_i - o_i \right)^2 \right]^{1/2}$
Among these terms, $M$ represents the sample size, $p_i$ the predicted value and $o_i$ the true value. $RM$ ranges over $[0, +\infty)$; when the predicted values exactly match the true values, $RM$ equals 0, i.e., a perfect model. The greater the error between the predicted and true values, the larger $RM$ becomes;
(2) The Mean Absolute Percentage Error (MAPE) is a statistical indicator used to measure the relative error between the predicted values and the actual values of the prediction model. The mean absolute percentage error can be expressed as
$MA = \frac{100\%}{M} \sum_{i=1}^{M} \left| \frac{o_i - p_i}{o_i} \right|$
Among these terms, $M$ represents the sample size, $p_i$ the predicted value and $o_i$ the true value. The $MA$ range is $[0, +\infty)$; $MA = 0$ represents a perfect model, while values greater than 1 indicate a poor model. Among multiple prediction models, the model with the lower $MA$ usually has better prediction accuracy;
(3) The Normalized Root Mean Square Error (NRMSE) is an indicator used to evaluate the performance of the prediction model. The normalized root mean square error can be expressed as [26]
$NR = \left[ \frac{\sum_{i=1}^{M} \left( p_i - o_i \right)^2}{\sum_{i=1}^{M} \left( o_i - \bar{o} \right)^2} \right]^{1/2}$
Among these terms, $M$ represents the sample size, $p_i$ the predicted value, $o_i$ the true value and $\bar{o}$ the mean of the true values. The value range of $NR$ is $[0, 1]$;
(4) Mean Absolute Deviation (MAD) is a statistical indicator for measuring the degree of dispersion of a set of data, representing the average value of the absolute deviation of each data point from the data mean. The mean absolute deviation can be expressed as
$MD = \frac{1}{M} \sum_{i=1}^{M} \left| o_i - p_i \right|$
Here, $M$ represents the sample size, $p_i$ the predicted value and $o_i$ the true value. The mean absolute deviation is an error statistic that sums the absolute deviations and divides the result by the number of records; it measures the average distance between each actual data point and the corresponding fitted point;
(5) Coefficient of determination ( R 2 ) is an indicator for measuring the prediction accuracy of the regression model, and its value range is usually between 0 and 1. The specific formula is as follows:
$R^2 = 1 - \frac{\sum_{i=1}^{M} \left( o_i - p_i \right)^2}{\sum_{i=1}^{M} \left( o_i - \bar{o} \right)^2}$
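The five indicators above translate directly into code. A minimal sketch (Python; `rm` follows the paper's 1/(M−1) normalization):

```python
import numpy as np

def rm(o, p):    # RMSE with the paper's 1/(M-1) normalization
    return np.sqrt(np.sum((p - o) ** 2) / (len(o) - 1))

def ma(o, p):    # MAPE, as a percentage
    return 100.0 / len(o) * np.sum(np.abs((o - p) / o))

def nr(o, p):    # NRMSE
    return np.sqrt(np.sum((p - o) ** 2) / np.sum((o - o.mean()) ** 2))

def md(o, p):    # MAD
    return np.mean(np.abs(o - p))

def r2(o, p):    # coefficient of determination
    return 1.0 - np.sum((o - p) ** 2) / np.sum((o - o.mean()) ** 2)

o = np.array([1.0, 2.0, 3.0, 4.0])
p = o.copy()                     # a perfect prediction
print(rm(o, p), ma(o, p), md(o, p), r2(o, p))  # 0.0 0.0 0.0 1.0
```

Note that `ma` assumes no true value is zero, since MAPE divides by $o_i$.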

4.4. Actual Wind-Tunnel Data Analysis

A confirmatory analysis experiment was conducted on the IAVOA-SVR model proposed in this chapter. The IAVOA-SVR model was compared with the benchmark SVR model, the AVOA-SVR model adopting the traditional optimization strategy, the PSO-SVR model based on particle swarm optimization and the GWO-SVR model based on Grey Wolf optimization. The optimization-algorithm parameters of each comparison model were set strictly in accordance with Table 2. The wind-tunnel test dataset was divided into training and test sets in a ratio of 6:4, and the data underwent kernel-principal-component-analysis dimension reduction and phase-space reconstruction. For the SVR parameters $C$ and $g$, the root mean square error between the predicted and actual values was taken as the fitness function. All experiments were completed in the Matlab R2021b environment. The error-evaluation indicators of each model's predictions for the two groups of data are shown in Table 5 and Table 6, respectively; the iteration curves of the fitness function are shown in Figure 7, the predicted and true values in Figure 8 and the errors in Figure 9.
The experimental results demonstrate the effectiveness of the IAVOA algorithm in optimizing the parameters of the Support Vector Regression (SVR) model for wind-tunnel temperature prediction. The IAVOA algorithm converges significantly faster than the AVOA, PSO and GWO algorithms. To evaluate prediction performance comprehensively, several error metrics were employed: Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE), Normalized Root Mean Square Error (NRMSE) and Mean Absolute Deviation (MAD). The IAVOA-SVR model outperforms the AVOA-SVR, PSO-SVR and GWO-SVR models on all of these metrics, and its coefficient of determination exceeds 0.99, the highest among the compared models. The IAVOA algorithm nevertheless has limitations: it is not guaranteed to be superior to other algorithms in every setting, its performance depends on various factors and it may still struggle with high-dimensional, complex optimization problems. Although IAVOA shows excellent convergence speed and prediction accuracy when optimizing SVR, future research can improve its algorithmic structure to better handle high-dimensional problems and thereby broaden its applicability.

4.5. Validation of the IAVOA-SVR Model

According to the grouping proportions shown in Figure 10, the first group uses 50% of the data for testing and 50% for training; the second group 66% for testing and 34% for training; the third group 75% for testing and 25% for training; and the fourth group 80% for testing and 20% for training. The wind-tunnel test data were preprocessed and used as the input of the IAVOA-SVR model, which was then validated on the MATLAB R2021b platform. The predicted and true values of data 1 are shown in Figure 11, the deviations in Figure 12 and the error-evaluation indicators in Table 7; the predicted and true values of data 2 are shown in Figure 13, the deviations in Figure 14 and the error-evaluation indicators in Table 8.
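The four grouping proportions can be generated as index splits. A sketch (Python; the chronological ordering and the rounding are assumptions, not stated by the paper):

```python
import numpy as np

def split_by_test_fraction(n, test_frac):
    """Chronological split: the first (1 - test_frac) of the samples train, the rest test."""
    cut = int(round(n * (1 - test_frac)))
    idx = np.arange(n)
    return idx[:cut], idx[cut:]

n = 1600  # data-1 sample size
for frac in (0.50, 0.66, 0.75, 0.80):   # groups 1-4 of Figure 10
    train, test = split_by_test_fraction(n, frac)
    print(len(train), len(test))
```

Shrinking the training share across the four groups stresses the model's stability, which is what Tables 7 and 8 then quantify.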
In Figure 11 and Figure 12, the prediction results for data 1 show that the proposed method closely approximates the actual temperature values in all four groups: the prediction bias in group 1 is less than 3, while in the other three groups it is less than 1, demonstrating good predictive performance. As shown in Figure 13 and Figure 14, the maximum prediction bias for data 2 is less than 2.5, further confirming the method's ability to predict wind-tunnel test temperatures. Table 7 and Table 8 report, for data 1 and data 2, respectively, the values of the four evaluation indicators $RM$, $MA$, $NR$ and $MD$. Both tables show that the proposed method yields relatively small errors on all four indicators, demonstrating its stability and effectiveness.
The IAVOA-SVR model established in this chapter performed well under time-series splitting and cross-validation: across sequence data from different time periods, it achieved satisfactory values of the root mean square error $RM$, mean absolute percentage error $MA$, normalized root mean square error $NR$ and mean absolute deviation $MD$. Based on these experimental results and analysis, the effectiveness of the proposed IAVOA-SVR model for temperature prediction on actual wind-tunnel test data has been demonstrated.

5. Conclusions

Based on actually collected wind-tunnel test data, this paper constructs an intelligent prediction model for the average temperature at the outlet of the wind-tunnel test section. In view of the nonlinearity, time dependence and chaotic character of the wind-tunnel test data, the KPCA and PSR methods were combined to preprocess the data, providing strong data support for the subsequent construction of the temperature-prediction model. The traditional wind-tunnel temperature-prediction model is an inherently parametric model, with insufficient prediction efficiency and accuracy and excessive reliance on human experience. To address this, this paper combines data-driven intelligent optimization algorithms with machine-learning models to construct the IAVOA-SVR intelligent wind-tunnel temperature-prediction model. To overcome the traditional AVOA's tendency to fall into local optima and its slow convergence, the structure of AVOA was improved by introducing adaptive chaotic mapping and a local-search enhancement strategy, yielding the IAVOA algorithm. Benchmark-function experiments show that the IAVOA algorithm iterates faster and converges better. To address the difficult parameter tuning and model instability of SVR, its parameters were optimized by IAVOA, and the IAVOA-SVR temperature-prediction model was constructed. Comparative analysis against the common SVR, PSO-SVR and GWO-SVR models on two sets of wind-tunnel test data shows that, for the proposed IAVOA-SVR model, the root mean square errors of data 1 and data 2 are 0.5305 and 0.5061, and the coefficients of determination are 0.9956 and 0.9983, respectively.
This indicates that it has the highest prediction accuracy and the best model error-evaluation indicators. The cross-validation of the time series of the two sets of data indicates the applicability of the IAVOA-SVR model.
In this paper, the African Vulture Optimization Algorithm was improved through the deep integration of chaos theory and a dynamic control strategy, combined with support vector regression and applied to the prediction of actual temperature-field data from a hypersonic wind tunnel. The results show that the proposed algorithm can accurately predict the average temperature at the outlet of the wind-tunnel flow field, with higher prediction accuracy and better error-evaluation indicators than the particle swarm and grey wolf optimization algorithms. The proposed temperature-prediction model can further be combined with a sliding-window mechanism or other techniques to construct an online learning model in subsequent work. This not only addresses the control problem of the temperature field in the wind tunnel; it also provides an effective solution for the intelligent optimization of complex engineering systems.

Author Contributions

Conceptualization, B.W. and Q.L.; methodology, L.S. and X.C.; software, L.S. and X.C.; validation, X.C.; formal analysis, X.C.; investigation, B.W., Q.L. and J.G.; resources, B.W. and J.G.; data curation, Q.L. and J.G.; writing—original draft preparation, L.S. and X.C.; writing—review and editing, L.S. and X.C.; visualization, X.C.; supervision, B.W. and Q.L.; project administration, L.S.; funding acquisition, L.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the project: Numerical Control System Application Demonstration, grant number: Zong20240393.

Data Availability Statement

The datasets are from actual wind-tunnel tests conducted in the Aerodynamics Research Institute, the Aviation Industry Corporation of China.

Acknowledgments

We would like to express our appreciation for the support provided by the research community, which has been instrumental in facilitating this work.

Conflicts of Interest

Authors Biling Wang, Qiang Li and Jin Guo were employed by the Aerodynamics Research Institute, The Aviation Industry Corporation of China. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Yule, G.U. On a Method of Investigating Periodicities in Distributed Series, with special reference to Wolfer’s Sunspot Numbers. Phil. Trans. R. Soc. Lond. A 1927, 226, 267–298. [Google Scholar]
  2. Winters, P.R. Forecasting sales by exponentially weighted moving averages. Manag. Sci. 1960, 6, 324–342. [Google Scholar]
  3. Tsay, R.S. Testing and modeling threshold autoregressive processes. J. Am. Stat. Assoc. 1989, 84, 231–240. [Google Scholar]
  4. Jones, S.; Poongodi, S.; Binu, L. Stabilizing controller using backstepping for steady mass flow in a wind tunnel. In Proceedings of the IEEE Recent Advances in Intelligent Computational Systems, Trivandrum, India, 22–24 September 2011; pp. 735–740. [Google Scholar]
  5. Castán-Lascorz, M.A.; Jiménez-Herrera, P.; Troncoso, A.; Asencio-Cortés, G. A new hybrid method for predicting univariate and multivariate time series based on pattern forecasting. Inf. Sci. 2022, 586, 611–627. [Google Scholar]
  6. Zhang, X.; Zhang, P.; Peng, B.; Xian, Y. Prediction of icing wind tunnel temperature field with machine learning. J. Exp. Fluid Mech. 2022, 36, 8–15. [Google Scholar]
  7. Gad, A.G. Particle swarm optimization algorithm and its applications: A systematic review. Arch. Comput. Methods Eng. 2022, 29, 2531–2561. [Google Scholar]
  8. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  9. Lambora, A.; Gupta, K.; Chopra, K. Genetic algorithm-A literature review. In Proceedings of the 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing, Faridabad, India, 14–16 February 2019; pp. 380–384. [Google Scholar]
  10. Price, K.V. Differential evolution. In Handbook of Optimization: From Classical to Modern Approach; Springer: Berlin/Heidelberg, Germany, 2013; pp. 187–214. [Google Scholar]
  11. Hansen, N.; Arnold, D.V.; Auger, A. Evolution strategies. In Springer Handbook of Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2015; pp. 871–898. [Google Scholar]
  12. Dorigo, M.; Stützle, T. Ant colony optimization: Overview and recent advances. In Handbook of Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2018; pp. 311–351. [Google Scholar]
  13. Jain, N.K.; Nangia, U.; Jain, J. A review of particle swarm optimization. J. Inst. Eng. Ser. B 2018, 99, 407–411. [Google Scholar] [CrossRef]
  14. Kıran, M.S.; Fındık, O. A directed artificial bee colony algorithm. Appl. Soft Comput. 2015, 26, 454–462. [Google Scholar] [CrossRef]
  15. Zhu, Q.; Zhu, M.; Cui, Q.; Wang, J.; Gu, Z. Research on Partial Discharge Pattern Recognition of Transformer Bushings Based on KPCA and GTO-SVM Algorithm. In Proceedings of the 6th International Conference on Power and Energy Technology, Beijing, China, 12–15 July 2024. [Google Scholar]
  16. Li, W.; Yue, H.; Valle-Cervantes, S.; Qin, S.J. Recursive PCA for adaptive process monitoring. J. Process Control 2000, 10, 471–486. [Google Scholar] [CrossRef]
  17. Yuan, G.; Ma, Y.; Tan, F.; Zhang, Z. Research on Transformer Fault Diagnosis Based on Improved INGO Optimization LSSVM. In Proceedings of the International Conference on Electrical Engineering and Control Science, Hangzhou, China, 29–31 December 2023. [Google Scholar]
  18. Zhou, Z. Machine Learning; Tsinghua University Press: Beijing, China, 2016; pp. 125–128. [Google Scholar]
  19. Vapnik, V.N. Pattern recognition using generalized portrait method. Autom. Remote Control 1963, 24, 774–780. [Google Scholar]
  20. Mousaei, A.; Naderi, Y.; Bayram, I.S. Advancing State of Charge Management in Electric Vehicles with Machine Learning: A Technological Review. IEEE Access 2024, 12, 43255–43283. [Google Scholar] [CrossRef]
  21. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
  22. Kumar, C.; Mary, D.M. Parameter estimation of three diode solar photovoltaic model using an Improved African Vultures optimization algorithm with Newton-Raphson method. J. Comput. Electron. 2021, 20, 2563–2593. [Google Scholar] [CrossRef]
  23. Kuang, X.; Hou, J.; Liu, X.; Lin, C.; Wang, Z.; Wang, T. Improved African Vulture Optimization Algorithm Based on Random Opposition-Based Learning Strategy. Electronics 2024, 13, 3329. [Google Scholar] [CrossRef]
  24. Nebro, A.J.; Durillo, J.; Garcia-Nieto, J.; Coello Coello, C.A.; Luna, F.; Alba, E. SMPSO: A New PSO-Based Metaheuristic for Multi-Objective Optimization. In Proceedings of the IEEE Symposium on Computational Intelligence in Multicriteria Decision Making, Nashville, TN, USA, 30 March–2 April 2009; pp. 66–73. [Google Scholar]
  25. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  26. Mousaei, A.; Naderi, Y. Predicting Optimal Placement of Electric Vehicle Charge Stations Using Machine Learning: A Case Study in Glasgow. In Proceedings of the Iranian Conference on Renewable Energies and Distributed Generation, Qom, Iran, 26 February 2025; Volume 14, pp. 1–7. [Google Scholar]
  27. Gao, C.; Ji, W.; Wang, J.; Zhu, X.; Liu, C.; Yin, Z.; Huang, P.; Yu, L. Real-Time prediction of pool fire burning rates under complex heat transfer effects influenced by ullage height: A comparative study of BPNN and SVR. Therm. Sci. Eng. Prog. 2024, 56, 103060. [Google Scholar] [CrossRef]
  28. Chen, Z.; Mu, H.; Liao, X.; Ouyang, H.; Huang, D.; Lu, J.; Chen, D. Multiobjective Optimization of the Difficult-to-Machine Material TC18 Based on AVOA-SVR and MOAVOA. JOM 2025, 77, 1727–1745. [Google Scholar] [CrossRef]
  29. Zhou, H.; Zhu, J.; Liu, Y. Transformer oil temperature prediction based on PSO-SVR. In Proceedings of the IEEE International Conference on Civil Aviation Safety and Information Technology, Hangzhou, China, 23–25 October 2024; pp. 1286–1290. [Google Scholar]
  30. Jin, H.; Hu, Y.; Ge, H.; Hao, Z.; Zeng, Z.; Tang, Z. Remaining useful life prediction for lithium-ion batteries based on an improved GWO–SVR algorithm. Chin. J. Eng. 2024, 46, 514–524. [Google Scholar]
Figure 1. Logistic mapping (a) chaotic sequence of Logistic mapping; (b) chaotic value distribution.
Figure 2. Flowchart of the improved African Vulture Optimization Algorithm.
Figure 3. Iteration curves on the test functions; (a–j) correspond to f1–f10.
Figure 4. IAVOA-SVR schematic diagram.
Figure 5. IAVOA-SVR flow chart.
Figure 6. Phase-space reconstruction index (a) data 1; (b) data 2.
Figure 7. Fitness function iteration curve (a) data 1; (b) data 2.
Figure 8. Predicted and actual values (a) data 1; (b) data 2.
Figure 9. Deviation (a) data 1; (b) data 2.
Figure 10. Model validation data set partitioning (a) data 1; (b) data 2.
Figure 11. True and predicted value of data 1 (a) Group 1; (b) Group 2; (c) Group 3; (d) Group 4.
Figure 12. Deviation of data 1 (a) Group 1; (b) Group 2; (c) Group 3; (d) Group 4.
Figure 13. True and predicted value of data 2 (a) Group 1; (b) Group 2; (c) Group 3; (d) Group 4.
Figure 14. Deviation of data 2 (a) Group 1; (b) Group 2; (c) Group 3; (d) Group 4.
Table 1. Ten sets of benchmark test functions.
Test Function | Dimensionality | Search Range | Optimum Value
$f_1(x)=\sum_{i=1}^{d} x_i^2$ | 20 | [−100, 100] | 0
$f_2(x)=\sum_{i=1}^{d} i x_i^2$ | 20 | [−100, 100] | 0
$f_3(x)=\sum_{i=1}^{d} x_i^2+\left(\sum_{i=1}^{d} 0.5 i x_i\right)^2+\left(\sum_{i=1}^{d} 0.5 i x_i\right)^4$ | 20 | [−10, 10] | 0
$f_4(x)=\sum_{i=1}^{d}\left(\sum_{j=1}^{i} x_j\right)^2$ | 20 | [−100, 100] | 0
$f_5(x)=x_1^2+10^6\sum_{i=2}^{d} x_i^2$ | 20 | [−100, 100] | 0
$f_6(x)=10d+\sum_{i=1}^{d}\left(x_i^2-10\cos 2\pi x_i\right)$ | 20 | [−5.2, 5.2] | 0
$f_7(x)=-20\exp\left(-0.2\sqrt{\frac{1}{d}\sum_{i=1}^{d} x_i^2}\right)-\exp\left(\frac{1}{d}\sum_{i=1}^{d}\cos 2\pi x_i\right)+20+e$ | 20 | [−32, 32] | 0
$f_8(x)=\frac{1}{4000}\sum_{i=1}^{d} x_i^2-\prod_{i=1}^{d}\cos\frac{x_i}{\sqrt{i}}+1$ | 20 | [−600, 600] | 0
$f_9(x)=\sum_{i=1}^{d-1}\left[100\left(x_{i+1}-x_i^2\right)^2+\left(1-x_i\right)^2\right]$ | 20 | [−30, 30] | 0
$f_{10}(x)=418.9829d-\sum_{i=1}^{d} x_i\sin\sqrt{\left|x_i\right|}$ | 20 | [−500, 500] | 0
Table 2. Parameter settings.
Algorithm | Parameters
GWO [25] | $a$: decreases from 2 to 0
PSO [24] | $c_1 = c_2 = 1.49$, $\omega = 0.8$
AVOA [21] | $p_1 = p_2 = p_3 = 0.5$, $\alpha = 0.8$, $\beta = 0.2$, $\omega = 2.5$
IAVOA | $p_1 = p_2 = p_3 = 0.5$, $\alpha = 0.8$, $\beta = 0.2$, $\omega = 2.5$
Table 3. Test values of basic functions.
Basic Function | Algorithm | Mean Value | Standard Deviation | Maximum | Minimum
$f_1$ | AVOA [21] | 2.27 × 10^−28 | 0.00 | 0.00 | 2.37 × 10^−31
$f_1$ | GWO [25] | 1.17 × 10^−7 | 4.06 × 10^−7 | 1.40 × 10^−8 | 2.12 × 10^−7
$f_1$ | PSO [24] | 4.87 × 10^−1 | 6.18 × 10^−1 | 1.88 × 10^−1 | 3.16 × 10^−1
$f_1$ | IAVOA | 2.09 × 10^−30 | 0.00 | 0.00 | 1.74 × 10^−31
$f_2$ | AVOA [21] | 2.16 × 10^−29 | 0.00 | 0.00 | 6.48 × 10^−29
$f_2$ | GWO [25] | 4.75 × 10^−7 | 9.67 × 10^−7 | 6.30 × 10^−8 | 3.75 × 10^−7
$f_2$ | PSO [24] | 2.66 × 10^0 | 8.27 × 10^0 | 3.22 × 10^−1 | 4.00 × 10^0
$f_2$ | IAVOA | 1.45 × 10^−31 | 0.00 | 0.00 | 4.36 × 10^−30
$f_3$ | AVOA [21] | 2.80 × 10^−16 | 0.00 | 3.83 × 10^−27 | 8.41 × 10^−16
$f_3$ | GWO [25] | 1.97 × 10^−1 | 9.49 × 10^−9 | 3.06 × 10^−21 | 5.21 × 10^−8
$f_3$ | PSO [24] | 1.72 × 10^0 | 9.79 × 10^3 | 1.89 × 10^3 | 4.69 × 10^4
$f_3$ | IAVOA | 1.38 × 10^−19 | 0.00 | 3.47 × 10^−26 | 4.15 × 10^−18
$f_4$ | AVOA [21] | 1.53 × 10^−23 | 0.00 | 4.94 × 10^−32 | 4.60 × 10^−23
$f_4$ | GWO [25] | 2.69 × 10^−2 | 1.44 × 10^−21 | 6.91 × 10^−3 | 7.93 × 10^−2
$f_4$ | PSO [24] | 2.01 × 10^3 | 2.80 × 10^3 | 9.71 × 10^−1 | 1.01 × 10^4
$f_4$ | IAVOA | 5.18 × 10^−24 | 0.00 | 5.17 × 10^−31 | 1.54 × 10^−24
$f_5$ | AVOA [21] | 3.49 × 10^−30 | 0.00 | 0.00 | 1.04 × 10^−30
$f_5$ | GWO [25] | 2.92 × 10^−7 | 6.84 × 10^−73 | 1.33 × 10^−77 | 3.10 × 10^−7
$f_5$ | PSO [24] | 4.00 × 10^3 | 4.98 × 10^3 | 2.75 × 10^−6 | 1.00 × 10^4
$f_5$ | IAVOA | 1.62 × 10^−28 | 0.00 | 0.00 | 4.87 × 10^−28
$f_6$ | AVOA [21] | 0.00 | 0.00 | 0.00 | 0.00
$f_6$ | GWO [25] | 9.99 × 10^−1 | 1.78 × 10^0 | 0.00 | 5.94 × 10^0
$f_6$ | PSO [24] | 5.39 × 10^1 | 3.65 × 10^1 | 2.38 × 10^1 | 2.22 × 10^2
$f_6$ | IAVOA | 0.00 | 0.00 | 0.00 | 0.00
$f_7$ | AVOA [21] | 8.88 × 10^−16 | 0.00 | 8.88 × 10^−16 | 8.88 × 10^−16
$f_7$ | GWO [25] | 2.06 × 10^1 | 9.89 × 10^−2 | 2.04 × 10^1 | 2.08 × 10^1
$f_7$ | PSO [24] | 2.00 × 10^1 | 8.96 × 10^−2 | 2.00 × 10^1 | 2.04 × 10^1
$f_7$ | IAVOA | 6.88 × 10^−16 | 0.00 | 8.88 × 10^−16 | 8.88 × 10^−16
$f_8$ | AVOA [21] | 0.00 | 0.00 | 0.00 | 0.00
$f_8$ | GWO [25] | 4.71 × 10^−3 | 1.11 × 10^−2 | 0.00 | 5.15 × 10^−2
$f_8$ | PSO [24] | 2.14 × 10^−2 | 1.83 × 10^−2 | 4.62 × 10^−13 | 6.14 × 10^−2
$f_8$ | IAVOA | 0.00 | 0.00 | 0.00 | 0.00
$f_9$ | AVOA [21] | 1.15 × 10^−5 | 9.55 × 10^−6 | 5.60 × 10^−7 | 3.67 × 10^−5
$f_9$ | GWO [25] | 1.70 × 10^1 | 4.90 × 10^−1 | 1.61 × 10^1 | 1.80 × 10^1
$f_9$ | PSO [24] | 3.63 × 10^3 | 1.63 × 10^4 | 3.61 × 10^0 | 9.00 × 10^4
$f_9$ | IAVOA | 2.10 × 10^−5 | 1.76 × 10^−5 | 1.63 × 10^−6 | 6.08 × 10^−5
$f_{10}$ | AVOA [21] | 3.04 × 10^2 | 4.81 × 10^2 | 2.54 × 10^−4 | 1.81 × 10^3
$f_{10}$ | GWO [25] | 4.05 × 10^3 | 5.65 × 10^2 | 3.28 × 10^3 | 5.53 × 10^3
$f_{10}$ | PSO [24] | 2.14 × 10^3 | 4.96 × 10^2 | 8.30 × 10^2 | 3.01 × 10^3
$f_{10}$ | IAVOA | 1.02 × 10^2 | 2.84 × 10^2 | 2.54 × 10^−4 | 1.42 × 10^3
Table 4. Steps of the IAVOA-SVR algorithm.
Step | Steps of the IAVOA-SVR Algorithm
Step 1 | Carry out kernel principal component analysis and phase-space reconstruction on the wind-tunnel test data
Step 2 | Construct the SVR temperature-prediction model and take the SVR parameters to be optimized as the input of IAVOA
Step 3 | Initialize parameters such as the population size and the number of iterations of IAVOA
Step 4 | Establish the optimization objective function and take the SVR temperature-training-model error as the IAVOA fitness function
Step 5 | Update the SVR parameters as the number of iterations increases
Step 6 | Update the record of the optimal fitness value and the optimal parameter combination
Step 7 | Determine whether the end condition (maximum number of iterations) is met; if not, return to Step 5
Step 8 | Output the optimal fitness result and the optimal parameter combination
Table 5. Data 1 error evaluation index of different models.
Data 1 | SVR [27] | AVOA-SVR [28] | PSO-SVR [29] | GWO-SVR [30] | IAVOA-SVR
$RM$ | 3.4530 | 0.7040 | 1.8180 | 1.4427 | 0.5305
$MA$ | 4.40 × 10^−3 | 7.99 × 10^−4 | 0.0016 | 0.0017 | 5.91 × 10^−4
$NR$ | 0.3716 | 0.0758 | 0.1956 | 0.1553 | 0.0571
$MD$ | 3.4027 | 0.6139 | 1.259 | 1.3386 | 0.4542
$R^2$ | 0.8618 | 0.9747 | 0.9512 | 0.9757 | 0.9956
Table 6. Data 2 error evaluation index of different models.
Data 2SVR [27]AVOA-SVR [28]PSO-SVR [29]GWO-SVR [30]IAVOA-SVR
R M 4.21890.80932.36030.64170.5061
M A 0.00538.63 × 10−40.00306.15 × 10−44.53 × 10−4
N R 0.35220.06760.19700.05360.0422
M D 4.15940.67932.32470.48350.3575
R 2 0.87590.98770.97810.98000.9983
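The index abbreviations in Tables 5 and 6 are not expanded in this excerpt; assuming RM, MA, and R² denote the root-mean-square error, mean absolute error, and coefficient of determination (common choices for regression evaluation), they can be computed as below. The temperature samples are hypothetical; NR and MD are omitted rather than guessed at:

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def mae(y_true, y_pred):
    """Mean absolute error."""
    n = len(y_true)
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical wind-tunnel temperature samples (K) and model predictions.
y_true = [300.0, 302.5, 305.1, 303.8]
y_pred = [300.4, 302.1, 305.6, 303.5]
```

Lower RM and MA values and an R² closer to 1 indicate a better fit, which is the pattern the IAVOA-SVR column shows in both tables.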
Table 7. Data 1 error evaluation index of different data groups.

Data 1 | Group 1 | Group 2 | Group 3 | Group 4
RM | 1.2778 | 0.3222 | 0.2706 | 0.9739
MA | 1.30 × 10^−3 | 3.25 × 10^−4 | 2.77 × 10^−4 | 9.74 × 10^−4
NR | 0.0587 | 0.2624 | 0.3092 | 0.1018
MD | 1.0557 | 0.2527 | 2.16 × 10^−1 | 0.7425
Table 8. Data 2 error evaluation index of different data groups.

Data 2 | Group 1 | Group 2 | Group 3 | Group 4
RM | 0.3764 | 0.1676 | 0.5475 | 0.1535
MA | 3.02 × 10^−4 | 1.76 × 10^−4 | 6.07 × 10^−4 | 1.55 × 10^−4
NR | 0.0344 | 0.0662 | 0.0543 | 0.0519
MD | 0.2437 | 1.41 × 10^−1 | 0.4811 | 0.1200

Shen, L.; Cui, X.; Wang, B.; Li, Q.; Guo, J. Improved African Vulture Optimization Algorithm for Optimizing Nonlinear Regression in Wind-Tunnel-Test Temperature Prediction. Processes 2025, 13, 1956. https://doi.org/10.3390/pr13071956
