Article

Uncertainty-Based Design Optimization Framework Based on Improved Chicken Swarm Algorithm and Bayesian Optimization Neural Network

1 Shenyang Aircraft Design and Research Institute, Shenyang 110035, China
2 School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(17), 9671; https://doi.org/10.3390/app15179671
Submission received: 30 July 2025 / Revised: 26 August 2025 / Accepted: 1 September 2025 / Published: 2 September 2025
(This article belongs to the Special Issue Data-Enhanced Engineering Structural Integrity Assessment and Design)

Abstract

As the complexity and functional integration of mechanical systems continue to increase in modern engineering practice, the challenges posed by changing environments and extreme working conditions are becoming increasingly severe. Traditional uncertainty-based design optimization (UBDO) suffers from low efficiency and slow convergence when dealing with nonlinear, high-dimensional, and strongly coupled problems. In response, this paper proposes a UBDO framework that integrates an efficient intelligent optimization algorithm with a high-quality surrogate model. By fusing butterfly search with Levy flight optimization, an improved chicken swarm algorithm is introduced to address the imbalance between global exploration and local exploitation in the original algorithm. Additionally, Bayesian optimization is employed to fit the limit-state evaluation function with a BP neural network, with the objective of reducing the high computational cost that repeated limit-state evaluations impose on uncertainty analysis during optimization. Finally, a decoupled optimization framework integrates uncertainty analysis with design optimization, enhancing global optimization capability under uncertainty and addressing results that would otherwise lack sufficient accuracy or reliability to meet design requirements. Results from engineering case studies demonstrate the notable effectiveness and superiority of the proposed UBDO framework.

1. Introduction

With the continuous advancement of science and technology, the complexity of engineering structures has further increased, posing more stringent requirements for structural design under variable environments and extreme operating conditions [1,2,3]. Traditional design methods struggle to meet today's comprehensive engineering requirements of high precision, high efficiency, and high reliability for complex systems [4,5]. The development of advanced intelligent design optimization methods and intelligent computing technology has provided new ideas for research on uncertainty-based design optimization (UBDO) of structures [6,7,8]. Establishing efficient and accurate reliability analysis models, as well as developing intelligent optimization algorithms with enhanced global convergence capabilities, has gradually become a powerful support for improving the safety of modern mechanical structures and reducing design risks [9,10,11].
In recent years, a variety of excellent UBDO methods have been developed and applied across different engineering fields. Meng et al. [12] proposed an optimization framework considering uncertainties, composed of four nested optimization loops, employing two improved gradient-based methods for optimization. However, while gradient-based optimization methods exhibit certain advantages, the performance and generalization capabilities of these methods may be influenced by the iteration step size and direction. Mierlo et al. [13] introduced a robust optimization method considering uncertainties based on Gaussian process surrogates, achieving superior optimization convergence results with the assistance of novel learning functions. Farahmand et al. [14] presented a solution approach for problems based on enhanced metaheuristic algorithms. Experimental results demonstrate that the flexibility, efficiency, and ability to effectively explore the entire search space inherent in metaheuristic algorithms play a crucial auxiliary role in enhancing the reliability of optimization designs, highlighting the application potential of metaheuristic algorithms in UBDO. From these studies, it can be observed that metaheuristic algorithms exhibit significant advantages over gradient-based optimization methods. Furthermore, there remains a high demand for more advanced metaheuristic algorithms.
In the related research on UBDO of various engineering structures, intelligent optimization algorithms have gradually become a common method in the field of engineering structure optimization owing to their powerful nonlinear processing capabilities and global search advantages [15,16,17]. These algorithms solve complex engineering optimization problems by simulating biological behaviors or physical phenomena in nature [18]. Particle swarm optimization (PSO) [19], genetic algorithms (GA) [20], ant colony algorithms [21], and the butterfly optimization algorithm (BOA) [22] are commonly used by researchers. In recent years, scholars have conducted innovative research on and applications of intelligent optimization algorithms in related fields. Meng et al. [23] proposed a support vector regression modeling strategy based on intelligent optimization for fatigue problems of offshore wind turbine support structures, which was used to build a high-precision approximate model. Ghasemi et al. [24] simulated the population growth and spread of ivy plants, proposing a novel bionic algorithm variant, the IVY Algorithm (IVYA); on 26 classic test functions, IVYA proved its effectiveness against other optimization algorithms. Zhu et al. [25] proposed a path planning algorithm inspired by insect behavior, which enhances performance by incorporating quantum computing methods and a hybrid strategy of multiple tactics. To improve the algorithm's global optimization capability, a merit set strategy was adopted to ensure a more uniform distribution of the initial insect population. The study used test functions and practical engineering cases to validate and compare the proposed algorithm, demonstrating its superiority in convergence speed and accuracy. Inspired by various natural behaviors of a certain bird species, Wang et al. [26] proposed another metaheuristic algorithm.
To achieve a favorable balance between exploring global solutions and exploiting local information, the algorithm also incorporates the Cauchy mutation strategy and the Leader strategy. Previous studies indicate that intelligent optimization algorithms exhibit strong global search capabilities and adaptability, showing promising applications in solving UBDO problems of engineering structures [27,28]. However, existing UBDO approaches struggle to strike a dynamic balance between computational efficiency and robustness, and the employed metaheuristic algorithms may exhibit performance deficiencies when confronted with diverse problem types. Consequently, when dealing with increasingly complex engineering structures, there remains an urgent need for metaheuristic algorithms capable of efficiently and robustly handling problems with varying degrees of nonlinearity.
Intelligent optimization algorithms can improve the accuracy of the UBDO process for engineering structures [29,30]. However, in the case of high-dimensional variable space, high computational complexity, and difficulty in obtaining simulation data, the UBDO process faces extremely high computational costs [31,32,33]. To solve such problems, researchers have proposed a method to introduce a surrogate model to replace traditional numerical simulation, thereby significantly improving optimization efficiency and accuracy [34,35]. Yu et al. [36] proposed an efficient UBDO method for monopile-supported offshore wind turbine structures. To enhance computational efficiency, a surrogate model was constructed using the radial basis function neural network. Allahvirdizadeh et al. [37] used adaptive training of the Kriging metamodel as a surrogate model for the computational model of the decoupled UBDO problem to reduce the computational cost. At the same time, a new stopping criterion was proposed to reduce the generalization error of the trained metamodel, thereby reducing the estimated failure probability. Fu et al. [38] investigated a hierarchical hybrid sequential optimization and reliability assessment strategy under multi-source uncertainties. Building upon intelligent optimization algorithms, they proposed an adaptive collaborative optimization approach, thereby extending and refining the theoretical framework of UBDO under multi-source uncertainties. From the UBDO methods applied to related engineering structures studied by previous researchers, various surrogate models can significantly reduce the computational cost and have great room for development in practical engineering applications [39,40,41]. In addition, various advanced models and technologies are also widely used in the safety and reliability assessment of actual engineering structures [42,43,44]. 
The Bayesian Optimization Back Propagation (BO-BP) neural network surrogate model has attracted wide attention in reliability analysis owing to its powerful nonlinear fitting ability and its advantages in uncertainty quantification [45,46]. As engineering structures grow more complex, developing intelligent optimization algorithms with enhanced global convergence capabilities and establishing efficient, accurate reliability analysis models have increasingly become powerful tools for improving the safety of modern mechanical structures and reducing design risks.
Nevertheless, traditional UBDO methods still suffer from low efficiency, slow convergence, and insufficient accuracy when dealing with nonlinear, high-dimensional, and highly coupled problems. Furthermore, existing UBDO research still lacks surrogate models capable of balancing high fitting accuracy with computational efficiency, making it difficult to meet the design requirements of complex engineering structures in terms of high precision, high efficiency, and high reliability. To address these challenges, this study proposes a novel UBDO method that integrates an improved Chicken Swarm Optimization (CSO) algorithm with a BO-BP neural network, which has achieved promising results on structural optimization problems. By introducing the butterfly optimization strategy together with adaptive mutation and a local search strategy, the convergence stability and optimization accuracy of the improved algorithm (BLCSO) are effectively enhanced. Through these improvements, this study aims to reduce the computational difficulty of obtaining the optimal solution of the UBDO problem and to enhance the accuracy of that solution. To address the computational efficiency issues that can arise when metaheuristic algorithms repeatedly call complex nonlinear constraint functions during optimization, this research proposes the BO-BP model: BO reduces the number of iterations required to establish the surrogate model while improving its fitting accuracy. Ultimately, a comprehensive UBDO framework is constructed by integrating the BLCSO algorithm with the BO-BP model, and its effectiveness and efficiency are validated through applications to complex engineering problems.
The contents of other parts of this study are as follows: Section 2 introduces the optimization algorithms CSO and BOA involved in this study, and also provides a brief overview of the BP neural network model. Section 3 elaborates in detail on the computational steps of the BLCSO algorithm and the UBDO algorithm (BLCSO-BO-BP) that integrates BLCSO with the BO-BP model. Section 4 evaluates the BLCSO algorithm and further tests the proposed BLCSO-BO-BP using engineering examples. Section 5 offers a concise summary of the research conducted in this study.

2. Basic Methods and Models

2.1. Optimization Algorithm

2.1.1. Chicken Swarm Optimization Algorithm

The CSO algorithm is a type of bionic intelligent algorithm, and its design is inspired by the foraging behavior and social hierarchy characteristics of poultry groups [47]. This algorithm divides the chicken swarm into multiple subgroups, with each subgroup following a composition pattern of “1 rooster + N hens + M chicks”. Here, the values of N and M are dynamically adjusted based on the total swarm size and the preset ratio parameter of hens to chicks. Within the swarm, there exists a hierarchical learning mechanism, where higher-ranking individuals exert behavioral guidance on lower-ranking ones, while collaborative and competitive relationships exist among individuals of the same rank. Additionally, competitive dynamics are also present among different subgroups. This dual-competition mechanism effectively enhances the algorithm’s global search capability.
The position update formula for roosters in the chicken swarm is as follows:
$$P_{i,j}^{T+1} = P_{i,j}^{T} + \mathrm{randn}\left(0, \sigma^{2}\right) \tag{1}$$
$$\sigma^{2} = \begin{cases} 1, & \text{if } f_i \le f_k \\ \exp\left(\dfrac{f_k - f_i}{\left|f_i\right| + \varepsilon}\right), & \text{if } f_k < f_i \end{cases} \tag{2}$$
where $P_{i,j}^{T}$ represents the location coordinate of individual $i$ in the $j$-th dimension at the $T$-th iteration; $\mathrm{randn}(0, \sigma^{2})$ denotes a Gaussian random number with a mean of 0 and a variance of $\sigma^{2}$; $k$ is the index of a randomly selected rooster, with $k \ne i$; $f_i$ denotes the fitness value of individual $i$; $f_k$ denotes the fitness value of rooster $k$; and $\varepsilon$ is an infinitesimal positive number that prevents the denominator from becoming zero.
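As a minimal Python sketch of the rooster move in Equations (1) and (2) (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def rooster_update(pos_i, f_i, f_k, eps=1e-12, rng=None):
    """Rooster move of Eqs. (1)-(2): a Gaussian perturbation whose
    variance sigma^2 depends on the fitness gap to a rival rooster k."""
    rng = np.random.default_rng() if rng is None else rng
    if f_i <= f_k:
        sigma2 = 1.0  # rooster i is no worse than rival k: unit variance
    else:
        # rival k is fitter: variance scaled by the normalized fitness gap
        sigma2 = np.exp((f_k - f_i) / (abs(f_i) + eps))
    return pos_i + rng.normal(0.0, np.sqrt(sigma2), size=pos_i.shape)
```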
The position update of hens follows the formula indicated as Equations (3)–(5).
$$P_{i,j}^{T+1} = P_{i,j}^{T} + C_1 \times \mathrm{rand} \times \left(P_{u,j}^{T} - P_{i,j}^{T}\right) + C_2 \times \mathrm{rand} \times \left(P_{v,j}^{T} - P_{i,j}^{T}\right) \tag{3}$$
$$C_1 = \exp\left(\dfrac{f_i - f_u}{\left|f_i\right| + \varepsilon}\right) \tag{4}$$
$$C_2 = \exp\left(f_v - f_i\right) \tag{5}$$
where $\mathrm{rand}$ is a random number uniformly distributed on $[0, 1]$; $u$ denotes the index of the mate rooster of the $i$-th hen; $v$ is the index of any randomly selected rooster or hen, with $v \ne u$; $C_1$ is the influence factor associated with the hen's partner rooster, and $C_2$ is the influence factor related to other roosters or hens; $f_u$ and $f_v$ denote the fitness values of individuals $u$ and $v$, respectively.
The position update formula for chicks is described as follows:
$$P_{i,j}^{T+1} = P_{i,j}^{T} + FL \times \left(P_{m,j}^{T} - P_{i,j}^{T}\right) \tag{6}$$
where $P_{m,j}^{T}$ represents the location of the mother of the $i$-th chick in the $j$-th dimension at the $T$-th iteration. The chick's position depends on both its own position and that of the mother hen. The learning factor $FL \in [0.4, 1]$ measures the degree to which the chick is influenced by its mother hen; the value of $FL$ is obtained through random sampling.
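The hen move of Equations (3)–(5) and the chick update above can be sketched similarly (a hedged illustration; names and the random-number conventions are our own):

```python
import numpy as np

def hen_update(pos_i, pos_u, pos_v, f_i, f_u, f_v, eps=1e-12, rng=None):
    """Hen move of Eqs. (3)-(5): follow the mate rooster u and a random
    neighbour v, weighted by the fitness-based factors C1 and C2."""
    rng = np.random.default_rng() if rng is None else rng
    c1 = np.exp((f_i - f_u) / (abs(f_i) + eps))
    c2 = np.exp(f_v - f_i)
    return (pos_i
            + c1 * rng.random(pos_i.shape) * (pos_u - pos_i)
            + c2 * rng.random(pos_i.shape) * (pos_v - pos_i))

def chick_update(pos_i, pos_mother, rng=None):
    """Chick move: step toward the mother hen with a random learning
    factor FL sampled uniformly from [0.4, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    fl = rng.uniform(0.4, 1.0)
    return pos_i + fl * (pos_mother - pos_i)
```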
At the initial stage of the algorithm, the individuals in the population are initialized as follows:
$$P_i^k = lb_k + \left(ub_k - lb_k\right) \times \mathrm{random}\left(0, 1\right) \tag{7}$$
where $i$ denotes the $i$-th chicken in the population, $i \in [1, M]$; $k$ represents the $k$-th dimension, $k \in [1, D]$; and $P_i^k$ is the value of the $k$-th dimension of the $i$-th chicken. $\mathrm{random}(0, 1)$ is a random number in the interval $(0, 1)$ that keeps the variable values within their specified range, and $ub_k$ and $lb_k$ signify the upper and lower boundaries of the $k$-th dimension variable, respectively. Subsequently, the fitness value is computed from the objective function, and the individual with the best fitness, together with its fitness value, is retained.
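A minimal sketch of this uniform initialization (illustrative names only):

```python
import numpy as np

def init_population(M, lb, ub, rng=None):
    """Uniform random initialization of M individuals within the
    per-dimension bounds [lb_k, ub_k], as in the formula above."""
    rng = np.random.default_rng() if rng is None else rng
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    return lb + (ub - lb) * rng.random((M, lb.size))
```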
The pertinent fundamental steps are outlined as follows:
Step 1: Population initialization to initiate the algorithm;
Step 2: The fitness value of each individual is evaluated, followed by sorting them. Subsequently, the roles of roosters, hens, mother hens, and chicks are determined, and the variable T is initialized to 1;
Step 3: If $\mathrm{mod}(T, G) = 1$, update the population hierarchy;
Step 4: The positions of the rooster, hen, and chicks are updated in accordance with the individual update formula;
Step 5: Compute the fitness values of all individuals within the population, update the globally optimal individual, and increment T by 1;
Step 6: In the event that the algorithm termination condition is fulfilled, the iteration ceases, and the optimal solution is generated as the output. Otherwise, advance to Step 3.
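The six steps above can be condensed into a compact, self-contained sketch (minimization; the population sizes, role ratios, boundary clipping, and overflow clamp are our own illustrative choices, not taken from the paper):

```python
import numpy as np

def cso_minimize(f, lb, ub, n_pop=30, n_iter=100, G=5,
                 r_ratio=0.2, h_ratio=0.6, seed=0):
    """Sketch of the CSO loop (Steps 1-6) for a generic objective f."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    pos = lb + (ub - lb) * rng.random((n_pop, dim))      # Step 1: init
    fit = np.apply_along_axis(f, 1, pos)                 # Step 2: evaluate
    best_i = fit.argmin()
    best_x, best_f = pos[best_i].copy(), fit[best_i]
    n_r, n_h = int(r_ratio * n_pop), int(h_ratio * n_pop)
    for t in range(n_iter):
        if t % G == 0:                                   # Step 3: refresh roles
            order = fit.argsort()
            roosters = order[:n_r]
            hens = order[n_r:n_r + n_h]
            chicks = order[n_r + n_h:]
            mates = rng.choice(roosters, size=hens.size)   # hen -> mate rooster
            mothers = rng.choice(hens, size=chicks.size)   # chick -> mother hen
        for j, i in enumerate(roosters):                 # Step 4: rooster moves
            k = roosters[(j + 1) % len(roosters)]
            s2 = 1.0 if fit[i] <= fit[k] else \
                np.exp((fit[k] - fit[i]) / (abs(fit[i]) + 1e-12))
            pos[i] = np.clip(pos[i] + rng.normal(0, np.sqrt(s2), dim), lb, ub)
        for j, i in enumerate(hens):                     # hen moves
            u = mates[j]
            v = rng.choice(np.concatenate([roosters, hens]))
            c1 = np.exp((fit[i] - fit[u]) / (abs(fit[i]) + 1e-12))
            c2 = np.exp(np.clip(fit[v] - fit[i], -50, 50))  # clamp overflow
            pos[i] = np.clip(pos[i]
                             + c1 * rng.random(dim) * (pos[u] - pos[i])
                             + c2 * rng.random(dim) * (pos[v] - pos[i]), lb, ub)
        for j, i in enumerate(chicks):                   # chick moves
            m = mothers[j]
            pos[i] = np.clip(pos[i] + rng.uniform(0.4, 1.0) * (pos[m] - pos[i]),
                             lb, ub)
        fit = np.apply_along_axis(f, 1, pos)             # Step 5: re-evaluate
        if fit.min() < best_f:                           # keep global best
            best_f = fit.min()
            best_x = pos[fit.argmin()].copy()
    return best_x, best_f                                # Step 6: output
```

On a simple sphere function the sketch converges toward the origin, which is enough to illustrate the control flow of the six steps.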

2.1.2. Butterfly Optimization Algorithm

Inspired by the behaviors of butterflies in nature, Sankalap Arora and Satvir Singh [48] proposed the BOA in 2019, which simulates the foraging and mating behaviors of butterflies for optimization search. This algorithm uses “scent” to simulate fitness values, guiding individuals to move within the solution space through two modes: global search and local search. Specifically, individuals perform rapid directional movement (global search) when perceiving distant signals, and otherwise engage in random movement (local search).
Within the BOA, the scent emitted by the butterfly is denoted by I , and f B O A is associated with the variation in fitness. When a butterfly possessing a relatively weaker I moves towards a butterfly with a stronger I , the value of f B O A escalates at a faster rate than I . Consequently, the scent is expressed as a function of the intensity of the stimulus:
$$f_{BOA} = c \times I^{a} \tag{8}$$
where $c$ is the sensory modality factor, which here refers to scent, and $a$ is a power exponent that depends on $f_{BOA}$ and controls the behavior of the algorithm, with $a \in [0, 1]$ and $c \in [0, \infty)$. When $a = 0$, the fragrance released by any butterfly is imperceptible to the other butterflies; when $a = 1$, the fragrance concentration dispersed by a butterfly is fully perceptible to the other butterflies.
The BOA is principally partitioned into three stages:
The first stage is the initial stage, including initializing the population, parameters, objective function, etc.
The second stage is the iteration stage. In each iteration, the butterflies will fly to a new position and recalculate their fitness values through the function;
Upon a butterfly’s perception of the scent released by another butterfly, it will migrate towards the source of the scent. This stage is formally termed the global search stage. The equation that characterizes the global search stage is presented below:
$$P_i^{T+1} = P_i^{T} + \left(r_{BOA}^{2} \times g^{*} - P_i^{T}\right) \times f_{BOA_i} \tag{9}$$
where $P_i^{T}$ denotes the solution vector of the $i$-th butterfly at the $T$-th iteration; $g^{*}$ represents the best solution in the current iteration; $f_{BOA_i}$ represents the scent of the $i$-th butterfly; and $r_{BOA}$ is a randomly generated number between 0 and 1.
In an alternative scenario, if a butterfly is unable to detect the scent emitted by other individuals, it will execute a random movement. This stage is formally termed the local search stage. The mathematical expression for the local search stage is as follows:
$$P_i^{T+1} = P_i^{T} + \left(r_{BOA}^{2} \times P_j^{T} - P_k^{T}\right) \times f_{BOA_i} \tag{10}$$
where $P_j^{T}$ and $P_k^{T}$ denote the $j$-th and $k$-th butterflies, randomly selected from the solution space.
During the foraging and mating activities of butterflies, the transition between the global search and the local search is governed by a switching probability $p$. In every iteration step, a value $r_{BOA}$ is generated at random according to Equation (11) and compared with the switching probability $p$ to determine whether the current iteration performs a global or a local search. In the original BOA, $p$ is typically set to 0.8 [48].
$$r_{BOA} = \mathrm{rand}\left(0, 1\right) \tag{11}$$
The third stage is the termination stage. When the algorithm reaches the upper limit of iterations, it performs a comprehensive search and then outputs the current optimal solution, which is the best result obtained up to that point in the algorithm.
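The three stages can be condensed into a compact, self-contained loop (minimization; the mapping from fitness to stimulus intensity `I`, the parameter values, and the boundary clipping are our own assumptions for illustration, not from the paper):

```python
import numpy as np

def boa_minimize(obj, lb, ub, n_pop=30, n_iter=100,
                 c=0.01, a=0.1, p=0.8, seed=0):
    """Sketch of the three BOA stages. Fragrance uses f = c * I^a,
    with stimulus I taken here as 1/(1 + fitness) (an assumption)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    pos = lb + (ub - lb) * rng.random((n_pop, dim))   # stage 1: initialize
    fit = np.apply_along_axis(obj, 1, pos)
    for _ in range(n_iter):                            # stage 2: iterate
        g = pos[fit.argmin()]                          # current best solution
        I = 1.0 / (1.0 + np.maximum(fit, 0.0))         # stimulus intensity
        frag = c * I ** a                              # fragrance f = c * I^a
        for i in range(n_pop):
            r = rng.random()                           # switch value r_BOA
            if r < p:                                  # global search step
                pos[i] = pos[i] + (r * r * g - pos[i]) * frag[i]
            else:                                      # local (random) step
                j, k = rng.choice(n_pop, 2, replace=False)
                pos[i] = pos[i] + (r * r * pos[j] - pos[k]) * frag[i]
            pos[i] = np.clip(pos[i], lb, ub)
        fit = np.apply_along_axis(obj, 1, pos)
    return pos[fit.argmin()], fit.min()                # stage 3: output best
```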

2.1.3. A Summary of the Strengths and Weaknesses of Original Optimization Algorithms

The CSO algorithm distinguishes itself from traditional evolutionary optimization methods (such as GA, immune optimization algorithms, differential evolution algorithms, etc.) in that its population iteration and update mechanism does not rely on biological operations like genetic inheritance or gene recombination to generate the next generation of the population. Consequently, it requires less memory and runtime, effectively reducing computational complexity. By fully leveraging the information shared among individuals with different roles within the swarm (roosters, hens, and chicks) during foraging, each type of individual employs distinct search strategies, enabling more efficient exploration in complex search spaces. Additionally, through the collaborative search mechanism among individuals with different roles, the algorithm facilitates cross-dimensional information sharing in complex optimization scenarios, ultimately converging towards the global optimum.
However, due to the relatively weak search capability of hens in the CSO algorithm, it is prone to being attracted by local optimal solutions. Consequently, this may result in insufficient search precision when the algorithm approaches the optimal solution. Additionally, the random mutation process in the CSO algorithm is blind and fails to effectively enhance population diversity. For individuals with high fitness, random mutation may disrupt their advantageous traits, leading to a decline in individual quality, population degradation, and reduced diversity. Therefore, enhancing the CSO algorithm’s ability to converge to the optimal solution as quickly as possible in the later stages of optimization, and improving the efficiency and accuracy of population mutation during the search process, have become significant research directions in this field.
The BOA utilizes the concept of “scent” to guide individuals in moving through the solution space. This mechanism enables the algorithm to swiftly locate regions with potential high-quality solutions during the global search phase. Furthermore, the relatively simple structure of the BOA facilitates its application and extension. However, due to the randomness inherent in its local search phase, the algorithm may exhibit slower convergence when addressing complex high-dimensional problems. When confronted with complex optimization challenges, although BOA can conduct local searches through random movements, this strategy may still lack an effective mechanism for escaping local optima compared to advanced optimization algorithms.
To this end, this study proposes an effective strategy that combines BOA and CSO. By leveraging BOA’s capability of rapid convergence towards the optimal solution in the later stages of iteration, it can compensate for the performance deficiencies of the CSO algorithm during its late-stage iterations. Additionally, since the CSO algorithm comprises multiple sub-populations with distinct roles, integrating it with BOA’s more efficient population search capability can further enhance the overall optimization performance.

2.2. BP Neural Network Model

The BP neural network is a network structure that computes forward, propagates errors backward, and corrects the weights so that the error decreases along the gradient direction. After repeated training, a BP neural network can produce stable and reliable outputs [49,50]. Currently, owing to the profound exploration of BP neural network theory and the swift advancement of electronic computers, this theory has found extensive application across various domains, including pattern recognition, intelligent robotics, biology, medicine, prediction and estimation, as well as risk assessment.
The principal architecture of the BP neural network comprises an input layer, a hidden layer, and an output layer, as illustrated in Figure 1. The input layer is composed of input units whose meaning and number are determined by the actual engineering scenario; for structural reliability analysis, the input units are generally the random variables of the structure. The output layer contains the outcomes generated by the neural network; for structural reliability analysis, the output is generally a safety margin. The hidden layer lies between the input and output layers and links them. The number of layers and neurons in the hidden layer dictates the complexity of the neural network: a network without a hidden layer can only address straightforward linear problems, whereas a network with one to three hidden layers can solve most nonlinear problems. The network's calculation accuracy relies mainly on the hidden-layer neurons, yet no established research can precisely determine their number. Typically, the number of hidden neurons is roughly estimated through the subsequent empirical formulas; this study instead utilizes the BO algorithm to determine the ideal count of neurons in the hidden layer.
$$m = \sqrt{n + l} + \alpha, \qquad m = \log_2 n, \qquad m = \sqrt{n l} \tag{12}$$
where m represents the quantity of neurons within the hidden layer, n denotes the number of neurons in the input layer, l signifies the count of neurons in the output layer, and α is a constant value falling within the range of 1 to 10.
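The three empirical estimates can be computed as follows (a sketch; the choice of alpha = 3 is arbitrary within the stated range of 1 to 10):

```python
import math

def hidden_neuron_estimates(n, l, alpha=3):
    """The three empirical estimates of the hidden-layer size m,
    given n input and l output neurons (alpha in [1, 10])."""
    return {
        "sqrt_sum":  math.sqrt(n + l) + alpha,  # m = sqrt(n + l) + alpha
        "log2":      math.log2(n),              # m = log2(n)
        "sqrt_prod": math.sqrt(n * l),          # m = sqrt(n * l)
    }
```

In practice the result is rounded to an integer and used only as a starting point; the text above replaces this heuristic with BO-driven tuning.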
The BP neural network establishes the internal rules governing the relationship between input features and output features by means of two sequential processes: forward propagation and backward propagation. In forward propagation, the output of each neuron in the previous layer is multiplied by the matching inter-layer weight $w_i$, and a linear operation yields the parameter $Z_0$, as shown in Equation (13). The parameter $Z_0$ is then substituted into the activation function $f(\cdot)$ to introduce nonlinearity, and the result is passed to the next neuron, as shown in Figure 2.
$$Z_0 = \sum_i w_i x_i + b_0 \tag{13}$$
where $Z_0$ is the input to the neuron in this layer after the linear operation; $w_i$ stands for the $i$-th weight of the neuron in this layer; $x_i$ signifies the output value of the $i$-th neuron in the prior layer, transmitted as an input of the neuron in this layer; $b_0$ is the neuron's bias in this layer; and $f(\cdot)$ is the activation function, which can be chosen from Sigmoid, Tanh, ReLU, and Leaky ReLU according to the purpose of the neural network.
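A one-neuron forward pass implementing Equation (13) followed by an activation can be sketched as (names are illustrative):

```python
import numpy as np

def neuron_forward(x, w, b0, act=np.tanh):
    """Eq. (13): Z0 = sum_i w_i * x_i + b0, then a nonlinear activation."""
    z0 = np.dot(w, x) + b0   # linear combination of previous-layer outputs
    return act(z0)           # introduce nonlinearity before passing on
```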
The backward propagation in the BP neural network denotes the procedure of fine-tuning the weights and biases of every neuron within the BP neural network by employing certain techniques (e.g., the gradient descent method). The objective is to diminish the discrepancy between the output value (the predicted value) and the actual value, consequently accomplishing the optimization of the computational accuracy of the BP neural network. The adjustment approach for weights and biases is illustrated in Equation (14).
$$w' = w - \eta \frac{\partial Loss}{\partial w}, \qquad b' = b - \eta \frac{\partial Loss}{\partial b} \tag{14}$$
where $w$ and $b$ are the weights and biases before updating; $w'$ and $b'$ are the weights and biases after updating, respectively; $\eta$ is the learning rate, which controls the step size of gradient descent; and $Loss$ is the loss function.
In this context, we opt for the Mean Square Error, which exhibits strong adaptability, excellent stability, and rapid convergence speed, to serve as the loss function for the BP neural network:
$$Loss = \frac{1}{n} \sum_{i=1}^{n} \left(y_i - \hat{y}_i\right)^{2} \tag{15}$$
where y i is the actual value and y ^ i is the predicted value of the neural network.
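For a linear neuron under the MSE loss above, the update of Equation (14) can be sketched with closed-form gradients obtained by differentiating the loss (a simplified illustration; names are our own):

```python
import numpy as np

def gd_step(w, b, x, y, eta=0.1):
    """One gradient-descent update (Eq. (14)) for a linear neuron
    y_hat = x @ w + b under the MSE loss."""
    y_hat = x @ w + b
    err = y_hat - y                       # residuals, shape (n,)
    grad_w = 2.0 / len(y) * x.T @ err     # dLoss/dw
    grad_b = 2.0 / len(y) * err.sum()     # dLoss/db
    return w - eta * grad_w, b - eta * grad_b

def mse(w, b, x, y):
    """MSE loss of the linear neuron on data (x, y)."""
    return float(np.mean((x @ w + b - y) ** 2))
```

A single step on a small linear dataset already reduces the loss, which is the behavior Equation (14) is meant to guarantee for a sufficiently small learning rate.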
For the activation function in the BP neural network, this paper chooses the Leaky ReLU function with high computational efficiency, fast convergence speed, no saturation mechanism, and no “dead ReLU” problem:
$$\mathrm{Leaky\ ReLU}\left(x\right) = I\left(x < 0\right) \sigma x + I\left(x \ge 0\right) x \tag{16}$$
where $I(\cdot)$ is the indicator function and $\sigma$ is a very small positive constant, typically taken as 0.01.
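A direct implementation of this activation (sketch):

```python
import numpy as np

def leaky_relu(x, sigma=0.01):
    """Leaky ReLU: sigma * x for x < 0, x otherwise (sigma is the
    small negative-side slope, e.g. 0.01)."""
    x = np.asarray(x, dtype=float)
    return np.where(x < 0, sigma * x, x)
```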
The training process of BP neural network is essentially an optimization process, so it is particularly important to choose a suitable optimization algorithm. The conventional gradient descent algorithm exhibits inadequate optimization capability and is highly susceptible to being trapped in local optimal solutions. Therefore, this study uses the Adaptive Moment Estimation (Adam) algorithm instead of the gradient descent method. The Adam algorithm integrates the merits of the RMSProp method and the momentum algorithm. It not only uses the moving average of the gradient instead of the negative gradient direction as the update direction of the parameters, but also adaptively adjusts the learning rate for different parameters.
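A single Adam update combining the two moment estimates can be sketched as follows (standard default hyperparameters; a simplified illustration, not the exact training code used in this study):

```python
import numpy as np

def adam_step(w, grad, m, v, t, eta=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: a momentum-style first moment m and an
    RMSProp-style second moment v, each bias-corrected, set the
    per-parameter step size."""
    m = b1 * m + (1 - b1) * grad            # first-moment (momentum) estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment (RMSProp) estimate
    m_hat = m / (1 - b1 ** t)               # bias correction for m
    v_hat = v / (1 - b2 ** t)               # bias correction for v
    w_new = w - eta * m_hat / (np.sqrt(v_hat) + eps)
    return w_new, m, v
```

At the first step the bias corrections make the update roughly eta times the gradient's sign, which is what gives Adam its well-conditioned early behavior.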

3. UBDO Based on BLCSO and BO-BP Neural Network Model

3.1. Improved CSO Based on Butterfly Optimization Strategy

3.1.1. Dynamic Multi-Population Partitioning Strategy

(1) Rooster search strategy integrating the butterfly algorithm
For the purpose of augmenting the global optimization performance of roosters, a dual-population crossover optimization strategy centered on roosters and butterflies was formulated. This strategy leverages the remarkable robustness and global search proficiency of the population in the BOA. This strategy uses a random number to determine the update between the two populations, that is, when the random number is less than R , the population is updated using Equation (9), otherwise it is updated using Equation (1). To examine the performance differences in the algorithm in solving complex optimization problems when the parameter R takes on various values, this study conducted a parameter sensitivity analysis, the details of which are included in Appendix A. The results of the sensitivity analysis indicate that when R = 0.7, the algorithm achieves a favorable balance between global and local search.
This fusion strategy can break the single optimization mode of roosters within the population, greatly enhance the global optimization ability of roosters, increase the randomness of algorithm optimization, and avoid getting trapped in local optimality.
During the process of updating the rooster position within the BLCSO framework, the CSO algorithm is amalgamated with the BOA. The fused rooster position update is given as:
$$P_{i,j}^{T+1} = \begin{cases} P_{i,j}^{T} + \left(r_{BOA}^{2} \times g^{*} - P_{i,j}^{T}\right) \times f_{BOA_i}, & \mathrm{rand} < R \\ P_{i,j}^{T} + \mathrm{randn}\left(0, \sigma^{2}\right), & \mathrm{rand} \ge R \end{cases} \tag{17}$$
$$\sigma^{2} = \begin{cases} 1, & \text{if } f_i \le f_k \\ \exp\left(\dfrac{f_k - f_i}{\left|f_i\right| + \varepsilon}\right), & \text{if } f_k < f_i \end{cases} \tag{18}$$
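A sketch of the fused rooster move (the fragrance value `frag_i`, the rival index `k`, and the threshold R = 0.7 from the sensitivity analysis are supplied by the caller; names are illustrative):

```python
import numpy as np

def blcso_rooster_update(pos_i, f_i, f_k, g_best, frag_i,
                         R=0.7, eps=1e-12, rng=None):
    """Fused rooster move: with probability R take a BOA-style global
    step toward the best solution (Eq. (9)), otherwise the original
    Gaussian CSO step (Eq. (1))."""
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() < R:                       # BOA branch
        r = rng.random()
        return pos_i + (r * r * g_best - pos_i) * frag_i
    # original CSO branch: fitness-dependent Gaussian perturbation
    sigma2 = 1.0 if f_i <= f_k else np.exp((f_k - f_i) / (abs(f_i) + eps))
    return pos_i + rng.normal(0.0, np.sqrt(sigma2), size=pos_i.shape)
```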
(2) Hen search strategy based on Levy flight
The term "Levy walk" was first proposed by the French mathematician Paul Lévy to model how organisms in nature search for food in an uncertain environment. Levy flight is a mathematical method that generates random factors obeying the Levy distribution. Its essence is a long-term, short-distance back-and-forth search interspersed with occasional long-distance moves. This feature can be exploited to form a Levy flight mechanism that ensures the population fully explores the solution space, improving the global optimization ability.
The mathematical expression for the Levy flight is presented as follows:
$$s = \frac{\mu}{\left|v\right|^{1/\beta}} \tag{19}$$
where $s$ represents the flight step length, $v \sim N(0, 1)$, $\mu \sim N(0, \sigma_{\mu}^{2})$, and the standard deviation $\sigma_{\mu}$ is:
σ μ = Γ 1 + β sin π β / 2 Γ 1 + β / 2 β × 2 β 1 / 2 1 / β .
where Γ is the function and β is the parameter, which is usually 1.5.
As can be seen from the aforementioned formula, the adopted Levy flight strategy adjusts step sizes by generating random numbers. Figure 3 illustrates the probability density function and random walk path of Levy flight when β = 1.5 . By comparing it with the Gaussian distribution, it is evident that Levy flight exhibits pronounced characteristics of a heavy-tailed stable distribution, with its random walk step sizes comprising numerous small steps interspersed with occasional large jumps.
Traditional random walks usually have only small step lengths and short distances of movement, with a relatively limited search range and unable to explore the search space extensively. In contrast, the Levy flight step size satisfies the heavy-tailed stable distribution and has the probability of long-range movement. This distribution method enables Levy flight to perform both detailed local searches and quickly explore new areas through long-distance jumps during the search process, thus a good balance between global and local searches is attained. Therefore, in the improved swarm optimization algorithm, the Levy flight strategy is employed for global detection, so that the swarm is widely distributed in the search space.
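The Levy step generation described above is commonly realized with Mantegna's algorithm; a self-contained sketch with the default $\beta = 1.5$:

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(beta=1.5, size=1, rng=None):
    """Draw Levy-flight step(s) via Mantegna's algorithm:
    s = mu / |v|**(1/beta), with mu ~ N(0, sigma_mu^2) and v ~ N(0, 1)."""
    if rng is None:
        rng = np.random.default_rng()
    sigma_mu = (gamma(1 + beta) * sin(pi * beta / 2)
                / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu = rng.normal(0.0, sigma_mu, size)
    v = rng.normal(0.0, 1.0, size)
    return mu / np.abs(v) ** (1 / beta)
```

Most draws are small steps, with occasional very large jumps, which is the heavy-tailed behavior illustrated in Figure 3.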
During the latter phase of algorithm iteration, hens exhibit a tendency to be trapped in local optima. To enhance the local optimization capability of the algorithm, a subset of the hens is selected to learn from the optimal individual, and the Levy flight strategy is incorporated as a perturbation. The mathematical expression is shown in Equation (21).
$$P_{i,j}^{T+1} = P_{best}^{T} + \left( P_{best}^{T} - P_{i,j}^{T} \right) \times s,$$

where $P_{best}$ denotes the individual with the best fitness value in the population and $s$ is the Levy flight step length.
If $i > N/2$, the $i$-th individual occupies a low-fitness position within the hen population, and Equation (21) is used to update its position in every iteration. Levy flight helps the algorithm explore a large search space effectively, combining fine-grained local search with occasional long-distance jumps, thereby preventing hens from being trapped in local optima and enhancing their search capability. In the original CSO algorithm, during the later stages of iteration, when hens perceive that roosters have discovered better food sources, they tend to flock toward them in large numbers; the resulting high population density around the roosters makes it easy to become stuck in a local optimum. Therefore, in this study, by incorporating Levy flight step sizes into the hens' position update formula and combining this with learning from the optimal individual, the influence of roosters on the search behavior of the hen population is reduced to a certain extent, thereby expanding the search space of the entire population.
The BLCSO hen position update formula is:
$$P_{i,j}^{T+1} = \begin{cases} P_{best}^{T} + \left( P_{best}^{T} - P_{i,j}^{T} \right) \times s, & i > \dfrac{N}{2}, \\ P_{i,j}^{T} + C_1 \times rand \times \left( P_{u,j}^{T} - P_{i,j}^{T} \right) + C_2 \times rand \times \left( P_{v,j}^{T} - P_{i,j}^{T} \right), & i \le \dfrac{N}{2}, \end{cases}$$

$$C_1 = \exp\left( \frac{f_i - f_u}{\left| f_i \right| + \varepsilon} \right), \qquad C_2 = \exp\left( f_v - f_i \right),$$

where $u$ is the index of the rooster that hen $i$ follows and $v$ is the index of a randomly selected individual in the flock ($v \ne u$).
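A minimal sketch of the two-branch hen update (illustrative only; the random indices `u` and `v` and the scalar Levy step `s` are stand-ins assumed to be supplied by the surrounding loop):

```python
import numpy as np

rng = np.random.default_rng(1)

def hen_update(P, i, fit, p_best, N, s, eps=1e-12):
    """Two-branch BLCSO hen move: a Levy-guided step toward the best
    individual for the lower-ranked half (i > N/2), otherwise the
    original CSO hen rule with learning factors c1, c2."""
    if i > N // 2:
        return p_best + (p_best - P[i]) * s  # Levy-guided step, Equation (21)
    u, v = rng.choice([k for k in range(N) if k != i], size=2, replace=False)
    c1 = np.exp((fit[i] - fit[u]) / (abs(fit[i]) + eps))
    c2 = np.exp(fit[v] - fit[i])  # note: may overflow for very large fitness gaps
    return (P[i] + c1 * rng.random() * (P[u] - P[i])
                 + c2 * rng.random() * (P[v] - P[i]))
```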
(3) The overall computational procedure of the BLCSO algorithm
The BLCSO algorithm avoids the tendency of the traditional CSO algorithm to fall into local optima. First, the initial population is generated via tent chaotic mapping. Second, for the rooster position update, the BOA and the CSO algorithm are amalgamated, with the BOA optimizer providing global optimization. Finally, to address the susceptibility of the CSO algorithm to local optima, a hen update strategy incorporating Levy flight is put forward to augment the algorithm's global optimization capability.
Figure 4 displays the overall flow chart, and the following provides a clear breakdown of the specific steps:
Step 1: (Parameter setting): Configure the parameters of the BLCSO algorithm, which encompasses setting the population size $N$, the dimension of the search space $D$, the maximum iteration count $M$, the quantities of roosters, hens, mother hens, and chicks, the sensory modality $c$, the power exponent $a$, the role-update period $G$, among others. Subsequently, initialize the population using the tent chaotic map;
Step 2: (Initialization): Compute the fitness value of every individual within the flock, arrange them in ascending order, ascertain the roles of roosters, hens, chicks, and mother hens, and record the optimal position along with the corresponding optimal fitness value of each individual. Then, initialize the iteration counter by setting the number of iterations T = 1 ;
Step 3: (System update): Evaluate whether the quantity of iterations fulfills the criteria for the update of population roles. If so, update the role; otherwise, go to Step 4;
Step 4: (Position update of butterfly optimizer): Set R = 0.7 , generate a random number r a n d , if r a n d < R , use Equation (9) to update the individual’s position, otherwise use Equation (1) to search;
Step 5: (Improved hen and chick position updates integrating Levy flight): For hen individuals with index $i > N/2$, update the position using Equation (21); for the remaining hens, update the position using Equation (3); for the chicks, update the position using Equation (6);
Step 6: (Population update of the T + 1 generation): Update the global optimal individual and recalculate the fitness values of individuals in the flock, record T = T + 1 ;
Step 7: (Termination check): If the maximum number of iterations $M$ has been reached, output the optimal solution; otherwise, return to Step 3. Note that the algorithm stops iterating only when the maximum number of iterations is reached.
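Steps 1–7 can be condensed into a compact, illustrative loop. The sketch below uses simplified stand-ins rather than the paper's exact operators: fixed role proportions (top quarter roosters, middle half hens, the rest chicks), a toy BOA fragrance factor of 0.1, and a crude Mantegna-style Levy step.

```python
import numpy as np

def blcso(obj, dim, n=20, iters=200, bounds=(-5.0, 5.0), G=10, R=0.7, seed=0):
    """Illustrative BLCSO skeleton (Steps 1-7); not the paper's implementation."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.random((n, dim))                    # Step 1: tent-map chaotic init
    for _ in range(10):
        x = np.where(x < 0.5, 2 * x, 2 * (1 - x))
    P = lo + x * (hi - lo)
    fit = np.apply_along_axis(obj, 1, P)        # Step 2: initial fitness
    best, fbest = P[np.argmin(fit)].copy(), float(fit.min())
    for t in range(iters):
        if t % G == 0:                          # Step 3: role update by rank
            order = np.argsort(fit)
            P, fit = P[order], fit[order]
        for i in range(n):
            if i < n // 4:                      # Step 4: roosters, BOA/CSO mix
                if rng.random() < R:
                    cand = P[i] + (rng.random() ** 2 * best - P[i]) * 0.1
                else:
                    cand = P[i] * (1 + rng.normal(0, 1, dim))
            elif i < 3 * n // 4:                # Step 5: hens via Levy step
                s = rng.normal(0, 0.7, dim) / np.abs(rng.normal(0, 1, dim)) ** (2 / 3)
                cand = best + (best - P[i]) * s
            else:                               # chicks follow a random hen
                m = rng.integers(n // 4, 3 * n // 4)
                cand = P[i] + rng.uniform(0.4, 1.0) * (P[m] - P[i])
            cand = np.clip(cand, lo, hi)
            f = obj(cand)
            if f < fit[i]:                      # greedy acceptance
                P[i], fit[i] = cand, f
        if fit.min() < fbest:                   # Step 6: global-best update
            fbest, best = float(fit.min()), P[np.argmin(fit)].copy()
    return best, fbest                          # Step 7: stop at max iterations
```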

3.1.2. Complexity Calculation of the BLCSO Algorithm

Time complexity can be measured by the runtime consumption and, to a certain extent, reflects the performance of an algorithm. The original CSO algorithm involves steps such as population initialization, role-based grouping of roosters, hens, and chicks, position updates, and fitness evaluations.
Assuming the population size is $N$ and the dimensionality of the solution space is $D$, population initialization takes $O(N \times D)$ time, and fitness evaluation has a time complexity of $O(N)$ per iteration. During the grouping phase, assuming that fitness-based re-grouping occurs in each iteration and requires sorting, the time complexity is $O(N^2)$. When updating positions, the update rules differ for roosters, hens, and chicks. Assuming there are $RN$ roosters, $HN$ hens, and $CN$ chicks, and considering that hens and chicks follow roosters and other hens, their updates may be somewhat more involved; nevertheless, each individual's position update remains $O(D)$, so the entire position update step is $O(N \times D)$. Combining these factors, the overall time complexity of the CSO algorithm is $O(N \times D)$.
The proposed BLCSO algorithm introduces the BOA search and Levy step-size adjustment into the CSO algorithm. These two strategies solely modify the displacements of roosters and hens during the iterative process, without altering the initialization, fitness evaluation, or position-update complexity. Therefore, the overall time complexity of BLCSO remains $O(N \times D)$, indicating that the improvements do not degrade computational efficiency.

3.1.3. Convergence Analysis of the BLCSO Algorithm

To demonstrate the global convergence of the BLCSO algorithm, this study employs a Markov chain model combined with the convergence criteria for stochastic algorithms to conduct the analysis. The proof is divided into two steps: first, proving that the BLCSO algorithm possesses Markovian properties, and second, demonstrating that the algorithm can converge to the global optimal solution.
Step 1: Prove that the BLCSO algorithm exhibits Markovian properties.
To prove this conclusion, the following definitions are first presented.
1.
Markov Model: Within the probability space $(\Omega, F, P)$, consider a one-dimensional countable sequence of random variables $X = \{ x_n : n > 0 \}$, where each random variable takes values $x = s_i$ with $s_i \in s$, satisfying
$$p\left( X_{t+1} \mid X_t, \ldots, X_1 \right) = p\left( X_{t+1} \mid X_t \right),$$
in which case $\{ X_t \}$ is referred to as a Markov chain, with the countable set $s \subseteq Z$ serving as the state space. If the state space is finite and the transition probability $p\left( X_{t+1} \mid X_t \right)$ depends solely on the state at time $t$ and is independent of the specific time instant $t$, the Markov chain is termed a finite homogeneous Markov chain.
2.
Definition of chicken states and state space: The position $P$ of each chicken in the flock constitutes its individual state, and the set of all possible states of this chicken forms its state space, denoted as $P = \{ P \mid P \in A \}$, where $A$ represents the feasible solution space.
3.
Definition of flock states and state space: The states of all chickens in the population constitute the flock state, denoted as $s = \left( P_1, P_2, \ldots, P_n \right)$, where $P_i$ represents the state of the $i$-th chicken and $n$ is the total number of individuals in the flock. The set of all possible flock states forms the flock state space, denoted as $S = \left\{ s = \left( P_1, P_2, \ldots, P_n \right) \mid P_i \in A,\ 1 \le i \le n \right\}$.
4.
Equivalence of flock states:
For $\forall s \in S$ and $\forall P \in s$, denote
$$\varphi(s, P) = \sum_{i=1}^{n} \chi_{P}\left( P_i \right),$$
where $\chi_P$ represents the indicator function of event $P$, and $\varphi(s, P)$ denotes the number of chicken states $P$ contained in the flock state $s$.
If there exist two flock states $s_1, s_2 \in S$ such that $\varphi\left( s_1, P \right) = \varphi\left( s_2, P \right)$ for any $P \in A$, then $s_1$ and $s_2$ are said to be equivalent, denoted as $s_1 \sim s_2$.
5.
Equivalence classes of flock states: A population state partitioning model can be constructed based on the state equivalence relation. Let the universal set of population states be denoted as S . Then, the corresponding set of equivalence classes for population states can be formally defined as L = S / ~ , which possesses the following properties:
  • For any equivalence class $L_i \in L$ within the flock, any two states within it satisfy the equivalence relation: $\forall s_i, s_j \in L_i$, $s_i \sim s_j$;
  • Different equivalence classes share no states, that is, $L_1 \cap L_2 = \varnothing$ for all $L_1 \ne L_2$.
6.
For any two individual states $\left( P_i, P_j \right) \in S \times S$, if there exists a transition operator $T_s$ such that the state transition of the individual satisfies $T_s\left( P_i \right) = P_j$, then the individual transition probability in the chicken swarm algorithm is given by:
$$P_f\left\{ T_s\left( P_i \right) = P_j \right\} = \begin{cases} P_{f_r}\left\{ T_s\left( P_i \right) = P_j \right\}, & \text{roosters}, \\ P_{f_h}\left\{ T_s\left( P_i \right) = P_j \right\}, & \text{hens}, \\ P_{f_c}\left\{ T_s\left( P_i \right) = P_j \right\}, & \text{chicks}. \end{cases}$$
Proof. 
According to Definition (5) and the operational mechanism of the chicken swarm, the one-step transition probability of a rooster is as follows:
$$P_{f_r}\left\{ T_s\left( P_i \right) = P_j \right\} = \begin{cases} \dfrac{1}{\left| P_i \times R \right|}, & P_j \in \left[ P_i,\ P_i + P_i \times R \right], \\ 0, & \text{else}; \end{cases}$$
The one-step transition probability for a hen is:
$$P_{f_h}\left\{ T_s\left( P_i \right) = P_j \right\} = \begin{cases} \dfrac{1}{\left| C_1 R_2 \left( P_u - P_i \right) + C_2 R_3 \left( P_v - P_i \right) \right|}, & P_j \in \left[ P_i,\ P_i + C_1 R_2 \left( P_u - P_i \right) + C_2 R_3 \left( P_v - P_i \right) \right], \\ 0, & \text{else}; \end{cases}$$
The one-step transition probability for a chick is:
$$P_{f_c}\left\{ T_s\left( P_i \right) = P_j \right\} = \begin{cases} \dfrac{1}{\left| FL \times \left( P_m - P_i \right) \right|}, & P_j \in \left[ P_i,\ P_i + FL \times \left( P_m - P_i \right) \right], \\ 0, & \text{else}; \end{cases}$$
Then, if the transfer of the flock state from $s_i$ to $s_j$ is denoted as $T_s\left( s_i \right) = s_j$, where $s_i, s_j \in S$ and $i, j \in [1, n]$, the transition probability of the flock state moving from $s_i$ to $s_j$ in one step is the product of the individual transition probabilities:
$$P_f\left\{ T_s\left( s_i \right) = s_j \right\} = \prod_{k=1}^{n} p\left\{ T_s\left( P_i^{k} \right) = P_j^{k} \right\}.$$
It can thus be seen that the transition probability of any chicken’s state is solely dependent on the state at time t , namely, the random numbers F L , C 1 , C 2 in the algorithm at that moment, as well as the current positions of randomly selected individuals in various roles within the chicken swarm. Therefore, all transition probabilities for chickens in the swarm exhibit this property, indicating that the BLCSO algorithm possesses Markovian characteristics. Consequently, the state sequence of the chicken swarm in BLCSO, s T T > 0 , constitutes a finite homogeneous Markov chain. □
Step 2: Prove that the BLCSO algorithm possesses global convergence.
First, the following definitions are made.
1.
The global optimal solution to the optimization problem is denoted as $P_{best}^{*}$, and the set of optimal states is $G = \left\{ s = (P) \mid f(P) = f\left( P_{best}^{*} \right),\ s \in S \right\}$, with $G \subseteq S$.
2.
If, for any initial state $P_0$, it holds that $\lim_{t \to \infty} P\left\{ s_t \in G \mid s_0 = P_0 \right\} = 1$, then the algorithm is said to converge in probability to the global optimal solution.
Proof. 
During the iterative process of the BLCSO algorithm, it is necessary to continuously update the optimal positions of individuals.
$$P_i^{T} = \begin{cases} P_i^{T-1}, & f\left( P_i^{T} \right) \ge f\left( P_i^{T-1} \right), \\ P_i^{T}, & f\left( P_i^{T} \right) < f\left( P_i^{T-1} \right), \end{cases}$$
where f P i T represents the fitness value of individual P i in the T -th iteration. Therefore, substitution occurs when the fitness value of the current optimal individual is less than that of the original individual:
$$P\left\{ P_{i+1} \notin G \mid P_i \in G \right\} = 0.$$
Considering that in the BLCSO algorithm, the chicken swarm state forms a finite homogeneous Markov chain, and within a given state space, the number of optimal solutions is monotonically non-decreasing. Therefore, the probability that the BLCSO algorithm fails to find the optimal solution after an infinite number of consecutive searches is zero. In other words, after a certain number of iterations, the state sequence of the chicken swarm will enter the optimal solution set G , and the chicken swarm algorithm will converge to the global optimum. The following equality holds:
$$\lim_{T \to \infty} P\left\{ f(T) = f^{*} \right\} = 1,$$
where $f(T)$ represents the optimal fitness value of $P_i^{T}$, and $f^{*}$ denotes the global optimal value. □
Based on the aforementioned proof, it can be concluded that the BLCSO algorithm exhibits global convergence.

3.2. The UBDO Framework Integrating Neural Networks with Novel Optimization Algorithms

3.2.1. BO-BP Neural Network Model

The BO algorithm finds the optimum of a black-box function by constructing the posterior probability of its output from a limited number of known sample points. It is currently one of the most important methods in the field of hyperparameter optimization. In this study, the BO algorithm is applied to optimize the parameters and hyperparameters of the BP neural network surrogate model. The primary purpose is to curb the detrimental effect of random initialization on training and to substantially enhance the model's fitting accuracy, which is crucial for achieving high-quality modeling results.
The problem of optimizing the parameters and hyperparameters of the BP neural network can be mathematically represented by Equation (35):
$$x^{*} = \underset{x \in \chi}{\arg\min}\ f(x),$$
where $x$ denotes the hyperparameters to be optimized, $f$ is the objective function, $x^{*}$ is the ideal hyperparameter combination, and $\chi$ is the feasible hyperparameter domain.
$$f_{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right| \times 100\%,$$
where $y_i$ denotes the actual value and $\hat{y}_i$ denotes the predicted value.
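For reference, Equation (36) amounts to the following one-liner (assuming no actual value is zero):

```python
def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, as in Equation (36); y_true must be nonzero."""
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)
```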
$$EI(x, D) = \int \max\left( y^{*} - y,\ 0 \right) p\left( y \mid x, D \right) \mathrm{d}y,$$
where $p\left( y \mid x, D \right)$ refers to the posterior probability obtained by the current probabilistic surrogate model, and $y^{*}$ denotes the optimal value among the current samples.
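When the posterior $p(y \mid x, D)$ is Gaussian, $N(m, s^2)$, the integral in Equation (37) has the familiar closed form for minimization, $EI = (y^{*} - m)\Phi(z) + s\,\phi(z)$ with $z = (y^{*} - m)/s$; a small sketch:

```python
from math import erf, exp, pi, sqrt

def expected_improvement(m, s, y_star):
    """Closed-form EI when p(y | x, D) = N(m, s^2) and the goal is minimization."""
    if s <= 0.0:
        return max(y_star - m, 0.0)  # degenerate posterior: no uncertainty
    z = (y_star - m) / s
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))     # standard normal CDF
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)   # standard normal PDF
    return (y_star - m) * Phi + s * phi
```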
The parameters to be optimized in this paper are the initial weights $w_0$ and the initial biases $b_0$; the hyperparameters subject to optimization encompass the number of neurons in the hidden layer, the number of iterative optimization steps for the BP neural network, the initial learning rate, the interval of the dynamic learning-rate decay, and the gradient of the dynamic learning-rate decay. The Mean Absolute Percentage Error (MAPE) on the BP neural network validation set is utilized as the objective function $f$ of the BO algorithm to assess the model's performance, as shown in Equation (36). The target MAPE value is set to 5%, and the maximum number of iterations is set to 1000. The Tree-structured Parzen Estimator (TPE), which is fast to compute, is selected as the probabilistic surrogate model $M$ that replaces the objective function during iterative optimization of parameters and hyperparameters; the Expected Improvement (EI) function is selected as the acquisition function $S$ to determine the location of the next optimal sample point, as shown in Equation (37).
BO-BP neural network model’s code flow [49] is as follows:
BO-BP neural network agent model
Input: $f$, $\chi$, $S$, $M$
Output: ideal parameter and hyperparameter combination $x^{*}$
1. Build the initial dataset $D$ from $(f, X)$
2. for $i = 1, 2, \ldots, T$ do
3.   fit the probabilistic surrogate model $M$ on $D$ to obtain $p(y \mid x, D)$
4.   $x_i \leftarrow \underset{x \in \chi}{\arg\min}\ S\left( x, p(y \mid x, D) \right)$
5.   $y_i \leftarrow f\left( x_i \right)$
6.   $D \leftarrow D \cup \left\{ \left( x_i, y_i \right) \right\}$
7. end for
The BO-BP neural network agent model construction process is as follows:
(1)
Determine the objective function, the feasible domain χ of parameters and hyperparameters, the acquisition function S , and the probabilistic surrogate model M .
(2)
Generate the initial dataset $D$: first, perform random sampling within the feasible domain of parameters and hyperparameters, then substitute the sampled points into the objective function to compute the target values, that is, $y_i = f\left( x_i \right)$. The initial dataset $D = \left\{ \left( x_1, y_1 \right), \ldots, \left( x_n, y_n \right) \right\}$ is thus obtained.
(3)
Specify the number of iterations $T$, which denotes how many times the objective function is executed. Since each objective evaluation requires a lot of computation, $T$ cannot be too large.
(4)
Calculate the posterior probability p y x , D of the data sample based on the selected probabilistic surrogate model M and the generated dataset D .
(5)
Based on the posterior probability distribution obtained by the probabilistic surrogate model, the acquisition function S is used to select the next most promising parameter and hyperparameter combination x i .
(6)
Substitute the most promising parameter and hyperparameter combination into the objective function to compute the target value, that is, $y_i = f\left( x_i \right)$.
(7)
Add the new data sample $\left( x_i, y_i \right)$ to the dataset $D$ as historical information with which to update the probabilistic surrogate model.
(8)
Repeat steps (4) to (7) until the iteration is completed and the optimal combination of parameters and hyperparameters is output.
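Steps (1)–(8) with a TPE-style surrogate can be sketched compactly. The sketch below is illustrative, not the paper's implementation: it splits observations into "good"/"bad" groups at a quantile, models each with a Gaussian KDE, and queries the candidate maximizing $l(x)/g(x)$, which is proportional to EI under the TPE model. The 1-D search domain and the candidate-pool size of 64 are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def tpe_minimize(f, lo, hi, n_init=10, iters=50, gamma=0.25, seed=0):
    """TPE-style Bayesian optimization sketch on a 1-D domain [lo, hi]."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, n_init)            # step (2): initial dataset
    Y = np.array([f(x) for x in X])
    for _ in range(iters):
        thresh = np.quantile(Y, gamma)         # split at the gamma-quantile
        good, bad = X[Y <= thresh], X[Y > thresh]
        if len(good) < 2 or len(bad) < 2:
            x_next = rng.uniform(lo, hi)       # fall back to random sampling
        else:
            l, g = gaussian_kde(good), gaussian_kde(bad)   # steps (4)-(5)
            cand = rng.uniform(lo, hi, 64)
            x_next = cand[np.argmax(l(cand) / (g(cand) + 1e-12))]
        X = np.append(X, x_next)               # steps (6)-(7): evaluate, extend D
        Y = np.append(Y, f(x_next))
    return X[np.argmin(Y)], float(Y.min())
```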
Compared to traditional hyperparameter optimization methods, such as grid search, random search, and GA, BO-BP demonstrates advantages in optimization efficiency, global search capability, and adaptability. By constructing a probabilistic surrogate model relating the hyperparameters to the objective function MAPE, BO-BP efficiently infers the optimal region from existing evaluation results, enabling the identification of hyperparameter combinations close to the global optimum with only a small number of iterations. The probabilistic surrogate model can capture the nonlinear characteristics of the objective function, guiding the search to concentrate on the globally optimal region and achieving faster convergence than other metaheuristic algorithms. Additionally, BO can automatically search for the optimal structure, reducing the need for manual intervention.

3.2.2. UBDO Framework

The proposed BO-BP neural network is employed for uncertainty analysis using the first-order reliability method, serving as a substitute for time-consuming constraint functions. Its main purpose is to mitigate the drawbacks of traditional methods, such as substantial computational expenses and sluggish convergence velocities, especially in the context of high-dimensional nonlinear problems. Combining the BO-BP algorithm with the BLCSO algorithm mentioned above, a new double-loop optimization design framework for structural reliability is established, and its overall process is shown in Figure 5.

4. Example Study Verification

4.1. Verification Based on Test Functions

To verify the advantages of the proposed BLCSO optimization algorithm over other optimization algorithms, the proposed BLCSO is compared with six notable metaheuristic algorithms, namely PSO, Grey Wolf Optimizer (GWO) [51], Harris Hawks Optimization (HHO) [52], GA [20], Sparrow Search Algorithm (SSA) [53], and the original CSO. Table 1 meticulously details the relevant parameters for each algorithm.
To evaluate the performance of the algorithm, 30 repeated experiments were conducted on the CEC2017 function suite [10]. The CEC2017 test functions encompass unimodal functions, multimodal functions, composite functions, irregular functions, and rotated functions, among others. Among them, unimodal benchmark functions feature a single optimal solution without local optima, enabling the assessment of the algorithm's convergence speed and accuracy. Multimodal benchmark functions contain multiple local optima, effectively testing the algorithm's global search capability and computational precision. Composite and irregular functions, with higher complexity and greater resemblance to real-world engineering functions, can effectively evaluate the algorithm's ability to handle highly nonlinear and high-dimensional problems. To further assess the algorithm's performance, the population size for the different heuristic algorithms was set to $N_P = 100$, and the number of iterations was set to $T_{max} = 1000$. The computational results of different algorithms on the benchmark test functions are included in Appendix B. One representative function from each category was selected for graphical illustration, as shown in Figure 6.
From the comparative results, it can be observed that BLCSO generally performs the best across almost all test functions, not only achieving the optimal optimization results but also demonstrating the highest stability. Compared to the latest optimization algorithms, the proposed optimization method exhibits significant superiority. During the long-term optimization iteration process, BLCSO demonstrates superior global iterative capability and exhibits a higher propensity to converge to the optimal solution.
To eliminate the impact of random factors during the evaluation process, statistical significance tests were added to further examine the performance of the BLCSO algorithm. The table in Appendix B presents the results of comparing the optimal solutions obtained by different algorithms and the BLCSO algorithm using the Wilcoxon signed-rank test. The results reveal that, for most test functions, the Wilcoxon test yields p-values substantially smaller than 0.05, and the Cliff’s delta effect size is less than −0.474. This indicates a statistically significant difference in performance between the BLCSO algorithm and other algorithms, with the BLCSO algorithm capable of obtaining smaller optimal solution values and demonstrating superior performance.

4.2. Validation of Uncertainty Optimization Case Studies

4.2.1. Mathematical Example

This case considers a mathematical example with two optimization variables; its optimization model is as follows:
$$\begin{aligned} \min\ & f\left( \mu_x \right) = \mu_{x_1} + \mu_{x_2} \\ \text{s.t.}\ & \beta \ge 3.0, \\ & 0 \le \mu_{x_i} \le 10, \quad i = 1, 2, \end{aligned}$$
where μ x represents the random variable x ’s mean value. The limit-state function of this example is as follows:
$$G(X) = \frac{x_1^2 x_2}{20} - 1.$$
Both optimization variables follow normal distributions: $x_1 \sim N\left( 5, 0.8^2 \right)$, $x_2 \sim N\left( 5, 0.8^2 \right)$.
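As a quick sanity check (not the paper's FORM-based analysis), a crude Monte Carlo estimate at the initial design point $(5, 5)$ confirms that the starting design is comfortably safe:

```python
import numpy as np

# Crude Monte Carlo at the initial design point mu = (5, 5).
rng = np.random.default_rng(0)
n = 200_000
x1 = rng.normal(5.0, 0.8, n)
x2 = rng.normal(5.0, 0.8, n)
G = x1**2 * x2 / 20.0 - 1.0       # limit state: failure when G < 0
pf = float(np.mean(G < 0.0))      # estimated failure probability
```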
Table 2 presents the optimization iteration data of the method proposed in this paper. It can be observed that the proposed method achieves the optimal solution after 7 iterations, with the number of newly added sample points being 58.
Table 3 presents optimization design methods based on the Kriging model (Kriging-method), the Response Surface Methodology model (RSM-method), the Support Vector Regression model (SVR-method), and the Radial Basis Function model (RBF-method). These methods also employ a bi-level optimization strategy, with the BLCSO algorithm adopted for the optimization process.
Additionally, the results of individual ablation experiments are provided: “BLCSO-None BP” denotes the optimization design results obtained using the BLCSO strategy without any surrogate model; “BLCSO-BP” represents the results based on a BP (Backpropagation) neural network model without BO; “CSO-BO-BP” indicates the results using a BO-optimized BP neural network model but employing the original CSO instead of the BLCSO algorithm; “BLCSO-PSO-BP” denotes a UBDO framework that integrates a BP model optimized by PSO with the BLCSO algorithm; and “BLCSO-BO-BP” refers to the proposed approach that utilizes both the BLCSO algorithm and the BO-BP neural network model.
From the computational results, it can be observed that the BLCSO-BO-BP approach incurs the least computational cost compared to other surrogate models. Among the optimization design methods employing different surrogate models, the method based on the RSM model utilizes 67 sample points. In contrast, the optimization design methods based on the proposed BO-BP model, namely CSO-BO-BP and BLCSO-BO-BP, require a smaller number of sample points. Specifically, the computational cost of BLCSO-BO-BP is reduced by 13.43%. Compared to the BP model optimized by the PSO algorithm, the BP model optimized by BO demonstrates superior performance. Furthermore, when comparing the accuracy and reliability of different optimization design results, it is evident that the proposed method exhibits higher accuracy while maintaining reliability.
With the aim of more vividly illustrating the change in the objective function value across each iteration process, Figure 7 illustrates the iteration curve of the objective function for this specific case.

4.2.2. The 10-Rod Truss Structure

The 10-rod truss structure in this example is derived from engineering applications, and its structure is shown in Figure 8. The 10 rods are made of aluminum alloy and can be divided into three groups: the first group consists of 4 rods, placed horizontally; the second group consists of 4 rods, placed at a 45-degree angle; the third group consists of 2 rods, placed vertically.
Within the 10-bar truss structure presented in this section, the elastic modulus of each bar is $10^7$ psi, and two concentrated external loads $P$, each of $10^5$ lb, act at nodes (5) and (6). The cross-sectional area $X$ of each bar follows a normal distribution with a mean of 3 in². The optimization model is:
$$\begin{aligned} \min\ & f\left( \mu_x \right) = 360 \left[ \left( \mu_{x_1} + \mu_{x_2} + \mu_{x_5} + \mu_{x_8} + \mu_{x_9} + \mu_{x_{10}} \right) + \sqrt{2} \left( \mu_{x_3} + \mu_{x_4} + \mu_{x_6} + \mu_{x_7} \right) \right] \\ \text{s.t.}\ & P\left\{ 14 - \mu_6(X) < 0 \right\} \le \Phi(-2) = 0.0228, \\ & 1 \le \mu_{x_i} \le 5, \quad i = 1, 2, \ldots, 10, \end{aligned}$$
where μ 6 X represents the vertical displacement of node (6).
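Assuming the standard 10-bar geometry (horizontal/vertical members of length 360 in, diagonals of length $360\sqrt{2}$ in, with bars 3, 4, 6, and 7 diagonal), the objective can be evaluated as:

```python
from math import sqrt

def truss_volume(mu):
    """Objective of the 10-bar truss: total member volume, with
    horizontal/vertical bars (1, 2, 5, 8, 9, 10) of length 360 in and
    diagonal bars (3, 4, 6, 7) of length 360*sqrt(2) in.
    mu: mean cross-sectional areas mu[0..9] (in^2)."""
    straight = mu[0] + mu[1] + mu[4] + mu[7] + mu[8] + mu[9]
    diagonal = mu[2] + mu[3] + mu[5] + mu[6]
    return 360.0 * (straight + sqrt(2.0) * diagonal)
```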
Table 4 shows the optimization results for the 10-bar truss structure. From the data in the table, it can be found that an ideal optimization result was obtained after 10 iterations, with a total of 876 new sample points added during the optimization process. Figure 9 presents the iteration curve of the objective function in this computational example, visually demonstrating the variation in the objective function value.
Table 5 presents the optimization design results of different methods for the 10-rod truss structure. From the results, it can be observed that the number of limit-state evaluations invoked by the BO-BP model is significantly fewer than those based on Kriging and RSM models, while achieving higher accuracy. Furthermore, in terms of computational accuracy, the BLCSO-BO-BP method obtains a smaller objective function value while satisfying reliability constraints. In comparison, although the CSO-BO-BP method also yields optimization results with higher reliability, the value of the optimal solution is larger. Additionally, compared to BLCSO-BP, the proposed BLCSO-BO-BP method requires fewer limit-state evaluations and demonstrates higher accuracy.

4.2.3. Cantilever Beam Structure

This example uses a cantilever beam structure with a rectangular cross-section, as shown in Figure 10. Its optimization model is shown in Equation (41):
$$\begin{aligned} \min\ & f = d_1 d_2 \\ \text{s.t.}\ & P\left\{ x_3 - \frac{600}{d_1 d_2}\left( \frac{x_1}{d_2} + \frac{x_2}{d_1} \right) < 0 \right\} \le \Phi(-2.5), \\ & P\left\{ 2.5 - \frac{4 \times 10^6}{x_4 d_1 d_2} \sqrt{ \frac{x_1^2}{d_2^4} + \frac{x_2^2}{d_1^4} } < 0 \right\} \le \Phi(-3.5), \\ & 0 \le d_i \le 5, \quad i = 1, 2, \end{aligned}$$
where $d_1$ and $d_2$ are the optimization variables, with initial values $\left( d_1, d_2 \right) = (2, 4)$. $X = \left( x_1, x_2, x_3, x_4 \right)^{T}$ is a vector of random variables: $x_1$ and $x_2$ represent external loads, $x_3$ represents the material strength, and $x_4$ represents the elastic modulus. All of $x_1, x_2, x_3, x_4$ are normally distributed, with means $\mu = \left( 1000, 500, 4 \times 10^4, 29 \times 10^6 \right)^{T}$ and standard deviations $\sigma = \left( 100, 100, 2 \times 10^3, 1.45 \times 10^6 \right)^{T}$.
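The two constraint margins of Equation (41) can be evaluated directly; at the initial design $(d_1, d_2) = (2, 4)$ and the mean values of $X$, both margins are positive (the displacement margin only barely so):

```python
from math import sqrt

def cantilever_margins(d1, d2, x1, x2, x3, x4):
    """Stress and tip-displacement margins of the cantilever model (Eq. (41));
    a positive margin means the corresponding constraint is satisfied."""
    g_stress = x3 - 600.0 / (d1 * d2) * (x1 / d2 + x2 / d1)
    disp = 4.0e6 / (x4 * d1 * d2) * sqrt(x1**2 / d2**4 + x2**2 / d1**4)
    return g_stress, 2.5 - disp

# Evaluate at the initial design and the mean of X.
gs, gd = cantilever_margins(2.0, 4.0, 1000.0, 500.0, 4.0e4, 29.0e6)
```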
Table 6 shows the optimization results for the cantilever beam structure. From the data in the table, it can be seen that an ideal optimization result was obtained after 7 iterations, with a total of 603 new sample points required during the calculation.
Table 7 presents the computational results of different optimization design methods for this case study. It can be observed that other surrogate models have invoked more than 1000 limit-state evaluations, whereas the proposed BLCSO-BO-BP method has only required 603 evaluations. Furthermore, compared to the optimization design method based on CSO, the BLCSO-BO-BP method achieves a smaller objective function value while satisfying reliability constraints. Therefore, the proposed method demonstrates outstanding effectiveness and superiority.
With the aim of presenting the variation in the objective function value in each iteration process in a more intuitive manner, Figure 11 illustrates the iteration curve of the objective function pertaining to the cantilever beam structure.

4.2.4. Vehicle Side Impact

As illustrated in Figure 12, this case study focuses on the structural design for vehicle side impact. In this example, seven random design variables and four levels of random design parameters that follow a normal distribution are considered, with their statistical parameters presented in Table 8. In this computational example, the vehicle side impact protection guidelines specified by the European Enhanced Vehicle-safety Committee (EEVC) were adopted as the safety standards. According to these guidelines: the allowable value for the passenger abdominal load is set at 1 kN, the allowable values for rib deflection (upper, middle, and lower) are set at 32 mm, the allowable values for the viscous criterion (upper, middle, and lower) are set at 0.32 m/s, the allowable value for the pubic symphysis force is set at 4.0 kN, and the allowable values for the velocity at the midpoint of the B-pillar and at the B-pillar location on the front door are set at 9.9 mm/ms and 15.69 mm/ms, respectively.
Based on the above description, the mathematical optimization model for this case is formulated as follows:
$$\begin{aligned} \min\ & f\left( \mu_x \right) = 1.98 + 4.9 \mu_{x_1} + 6.67 \mu_{x_2} + 6.98 \mu_{x_3} + 4.01 \mu_{x_4} + 1.78 \mu_{x_5} + 2.73 \mu_{x_7} \\ \text{s.t.}\ & P\left\{ g_i \ge 0 \right\} \ge \Phi(3), \quad i = 1, 2, \ldots, 10, \\ & g_1 = 1\,\text{kN} - F_{AL}; \quad g_2 = 32\,\text{mm} - D_{up}; \quad g_3 = 32\,\text{mm} - D_{mid}; \\ & g_4 = 32\,\text{mm} - D_{low}; \quad g_5 = 0.32\,\text{m/s} - VC_{up}; \quad g_6 = 0.32\,\text{m/s} - VC_{mid}; \\ & g_7 = 0.32\,\text{m/s} - VC_{low}; \quad g_8 = 4\,\text{kN} - F_{ps}; \quad g_9 = 9.9\,\text{mm/ms} - V_{B\text{-}pillar}; \\ & g_{10} = 15.69\,\text{mm/ms} - V_{door}, \end{aligned}$$
Among them, the expression of the objective function is obtained through finite element analysis and response surface methodology. Additionally, the performance functions in the probabilistic constraint functions are derived using the same approach, specifically represented as:
$$
\begin{aligned}
F_{AL} ={}& 1.16 - 0.3717 x_2 x_4 - 0.00931 x_2 x_{10} - 0.484 x_3 x_9 + 0.01343 x_6 x_{10} \\
D_{up} ={}& 28.98 + 3.818 x_3 - 4.2 x_1 x_2 + 0.0207 x_5 x_{10} + 6.63 x_6 x_9 - 7.77 x_7 x_8 + 0.32 x_6 x_{10} \\
D_{mid} ={}& 33.86 + 2.95 x_3 + 0.1792 x_{10} - 5.057 x_1 x_2 - 11 x_2 x_8 - 0.0215 x_5 x_{10} - 9.98 x_7 x_8 + 22 x_8 x_9 \\
D_{low} ={}& 46.36 - 9.9 x_2 - 12.9 x_1 x_8 + 0.1107 x_3 x_{10} \\
VC_{up} ={}& 0.261 - 0.0159 x_1 x_2 - 0.188 x_1 x_8 - 0.019 x_2 x_7 + 0.0144 x_3 x_5 + 0.0008757 x_5 x_{10} \\
& + 0.08045 x_6 x_9 + 0.00139 x_8 x_{11} + 0.00001575 x_{10} x_{11} \\
VC_{mid} ={}& 0.2147 + 0.00817 x_5 - 0.131 x_1 x_8 - 0.0704 x_1 x_9 + 0.03099 x_2 x_6 - 0.018 x_2 x_7 + 0.0208 x_3 x_8 \\
& + 0.121 x_3 x_9 - 0.00364 x_5 x_6 + 0.0007715 x_5 x_{10} - 0.0005354 x_6 x_{10} + 0.00121 x_8 x_{11} \\
VC_{low} ={}& 0.74 - 0.61 x_2 - 0.163 x_3 x_8 + 0.001232 x_3 x_{10} - 0.166 x_7 x_9 + 0.227 x_2^2 \\
F_{ps} ={}& 4.72 - 0.54 x_4 - 0.19 x_2 x_3 - 0.0122 x_4 x_{10} + 0.009325 x_6 x_{10} + 0.000191 x_{11}^2 \\
V_{B\text{-pillar}} ={}& 10.58 - 0.674 x_1 x_2 - 1.95 x_2 x_8 + 0.02054 x_3 x_{10} - 0.0198 x_4 x_{10} + 0.028 x_6 x_{10} \\
V_{door} ={}& 16.45 - 0.489 x_3 x_7 - 0.843 x_5 x_6 + 0.0432 x_9 x_{10} - 0.0556 x_9 x_{11} - 0.000786 x_{11}^2
\end{aligned}
$$
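These response surfaces are cheap to evaluate once fitted. The sketch below codes two of them, $F_{AL}$ and $D_{low}$, together with the corresponding constraints $g_1$ and $g_4$; the sign pattern follows the widely used EEVC side-impact benchmark (the extraction above lost the minus signs), and the 1-indexed list layout is a convenience assumption, not the paper's implementation:

```python
def F_AL(x):
    """Abdominal load [kN]; x is 1-indexed, x[1]..x[11], with x[0] unused padding."""
    return (1.16 - 0.3717 * x[2] * x[4] - 0.00931 * x[2] * x[10]
            - 0.484 * x[3] * x[9] + 0.01343 * x[6] * x[10])

def D_low(x):
    """Lower rib deflection [mm]."""
    return 46.36 - 9.9 * x[2] - 12.9 * x[1] * x[8] + 0.1107 * x[3] * x[10]

# Constraint margins from the model above (positive = safe)
def g1(x):
    return 1.0 - F_AL(x)    # allowable abdominal load: 1 kN

def g4(x):
    return 32.0 - D_low(x)  # allowable lower rib deflection: 32 mm

# Smoke test with every variable set to 1
x = [0.0] + [1.0] * 11
print(round(F_AL(x), 5), round(D_low(x), 4))  # 0.30842 23.6707
```

In the proposed framework these explicit evaluations are what the BO-BP surrogate replaces when the true limit states come from finite element runs rather than closed-form response surfaces.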
Table 9 summarizes the optimization design results for vehicle side impact structures obtained using different optimization design methods.
From the above computational results, all of the optimization methods reach a converged design. Among the surrogate models, BO-BP requires the smallest total number of sample points; in particular, BLCSO-BO-BP reduces the computational cost by 37.16% relative to the commonly used Kriging model. Furthermore, comparing CSO with BLCSO under the same surrogate model shows that the latter has superior optimization capability and yields a more reliable design solution. In summary, the proposed BLCSO-BO-BP framework exhibits clear effectiveness and superiority.

5. Conclusions

In this study, for the UBDO research of practical engineering problems, a framework incorporating the BLCSO and BO-BP neural network models is proposed. By applying the butterfly optimization strategy and the Levy flight mechanism, the local search ability and global search ability of the CSO algorithm are effectively balanced, resulting in a more robust search performance. At the same time, the adoption of the BO-BP neural network model serves to diminish the computational cost of overall UBDO, paving the way for an efficient and accurate structural design while enhancing the overall design quality. Finally, the proposed method was validated through 29 test functions and four engineering examples. From the comparison among different optimization algorithms, it can be observed that BLCSO achieves significant improvements in computational accuracy and robustness compared to the original CSO, and demonstrates highly competitive performance relative to other state-of-the-art optimization algorithms. The results from the engineering examples reveal that BLCSO-BO-BP exhibits higher computational accuracy and lower computational cost. Furthermore, the ablation experiment results indicate that the BO-BP model achieves a substantial enhancement in computational efficiency compared to the BP model, while BLCSO demonstrates superior computational accuracy over CSO.
However, the surrogate model adopted in this study is purely data-driven. Incorporating physical information into surrogate models represents a promising new direction, so subsequent research will focus on integrating more advanced surrogate models into novel UBDO strategies. In addition, as engineering problems become increasingly complex, high-dimensional and highly nonlinear problems have emerged as key challenges; more effective solutions are needed for problems of high complexity and for engineering problems involving large-scale datasets. These will be the focal points of future work.

Author Contributions

Conceptualization, Q.J., R.L. and S.J.; Methodology, Q.J., R.L. and S.J.; Validation, Q.J., R.L. and S.J.; Formal analysis, Q.J., R.L. and S.J.; Resources, Q.J., R.L. and S.J.; Writing—original draft, Q.J., R.L. and S.J.; Writing—review and editing, Q.J., R.L. and S.J.; Funding acquisition, Q.J., R.L. and S.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

To evaluate the optimization performance of the BLCSO algorithm for different values of the parameter R, this study conducted tests using various benchmark functions, encompassing unimodal, multimodal, and composite functions. The expressions of these benchmark functions are provided in Table A1, where D denotes the dimension of the test function.
Table A1. Information on the test functions.
| Function | D | Range |
|---|---|---|
| $F_1(\mathbf{x}) = \sum_{i=1}^{n} x_i^2$ | 30 | $[-5.12, 5.12]^D$ |
| $F_2(\mathbf{x}) = \sum_{i=1}^{n-1} \left[ 100 \left( x_{i+1} - x_i^2 \right)^2 + \left( x_i - 1 \right)^2 \right]$ | 30 | $[-2.048, 2.048]^D$ |
| $F_3(\mathbf{x}) = -20 \exp\left( -0.2 \sqrt{\tfrac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \tfrac{1}{n} \sum_{i=1}^{n} \cos 2\pi x_i \right) + 20 + e$ | 30 | $[-32.768, 32.768]^D$ |
| $F_4(\mathbf{x}) = A D + \sum_{i=1}^{D} \left( x_i^2 - A \cos 2\pi x_i \right),\ A = 10$ | 10 | $[-5.12, 5.12]^D$ |
| $F_5(\mathbf{x}) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos \frac{x_i}{\sqrt{i}} + 1$ | 30 | $[-600, 600]^D$ |
| $F_6(\mathbf{x}) = 418.9829 D - \sum_{i=1}^{D} x_i \sin \sqrt{\left| x_i \right|}$ | 50 | $[-500, 500]^D$ |
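These test functions have compact standard forms; the sketch below implements four of them (sphere, Ackley, Rastrigin, and Schwefel) and uses their known global optima as a smoke test. This assumes the standard definitions of these benchmarks, not any variant specific to this study:

```python
import math

def sphere(x):
    """F1: unimodal sphere function; global minimum 0 at the origin."""
    return sum(v * v for v in x)

def ackley(x):
    """F3: Ackley function; global minimum 0 at the origin."""
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

def rastrigin(x, A=10):
    """F4: Rastrigin function; global minimum 0 at the origin."""
    return A * len(x) + sum(v * v - A * math.cos(2 * math.pi * v) for v in x)

def schwefel(x):
    """F6: Schwefel function; global minimum near x_i = 420.9687."""
    return 418.9829 * len(x) - sum(v * math.sin(math.sqrt(abs(v))) for v in x)

print(sphere([0.0] * 30), rastrigin([0.0] * 10))          # 0.0 0.0
print(abs(schwefel([420.9687] * 50)) < 0.05)              # True (near-zero residual)
```

The many local minima of the Rastrigin and Schwefel landscapes are exactly what stresses the balance between global exploration and local exploitation that BLCSO targets.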
The value range of the parameter R is set from 0 to 1, with a step size of 0.1. For each value of R, the population size is set to 50 and the maximum number of iterations to 100, and thirty independent optimization runs are conducted to evaluate the algorithm's stability. Table A2 presents the convergence speed, together with the minimum, maximum, average, and standard deviation of the optimal solutions obtained over these runs. Bold entries in the table highlight the results for R = 0.7.
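The bookkeeping of this protocol (thirty seeded runs, then min/mean/max/standard deviation and timing) can be sketched as follows. BLCSO itself is not reproduced here; a plain random-search stand-in (a hypothetical placeholder) is used only to illustrate the experimental harness:

```python
import math
import random
import statistics
import time

def sphere(x):
    return sum(v * v for v in x)

def random_search(f, dim, bounds, iters=100, pop=50, rng=None):
    """Stand-in optimizer (pure random search), used only to illustrate the protocol."""
    rng = rng or random.Random()
    lo, hi = bounds
    best = math.inf
    for _ in range(iters):
        for _ in range(pop):
            x = [rng.uniform(lo, hi) for _ in range(dim)]
            best = min(best, f(x))
    return best

# Thirty independent runs, one fixed seed per run for reproducibility
t0 = time.perf_counter()
results = [random_search(sphere, 5, (-5.12, 5.12), iters=20, pop=20,
                         rng=random.Random(seed)) for seed in range(30)]
elapsed = time.perf_counter() - t0

summary = {
    "min": min(results),
    "mean": statistics.mean(results),
    "max": max(results),
    "std": statistics.stdev(results),
    "time_per_run_s": elapsed / 30,
}
print(summary["min"] <= summary["mean"] <= summary["max"])  # True
```

Swapping the stand-in for the real optimizer, and sweeping R from 0 to 1, reproduces the structure of Table A2.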
Table A2. Results of sensitivity experiments.
| Function | R | Minimum of the Optimal Solutions | Average of the Optimal Solutions | Maximum of the Optimal Solutions | Standard Deviation of the Optimal Solutions | Convergence Speed (s) |
|---|---|---|---|---|---|---|
| F1 | 0.1 | 1.2674 × 10^−35 | 2.6401 × 10^−33 | 1.2571 × 10^−32 | 4.1469 × 10^−33 | 0.029508 |
| | 0.2 | 0.00015681 | 0.002091 | 0.0047768 | 0.001546 | 0.12979 |
| | 0.3 | 2.071 × 10^−214 | 9.0618 × 10^−69 | 9.0618 × 10^−68 | 2.8656 × 10^−68 | 0.030281 |
| | 0.4 | 6.051 × 10^−35 | 2.5161 × 10^−25 | 2.5156 × 10^−24 | 7.9549 × 10^−25 | 0.022371 |
| | 0.5 | 0.25392 | 0.52169 | 0.74901 | 0.13485 | 0.015554 |
| | 0.6 | 4.2816 × 10^−7 | 3.8951 × 10^−6 | 7.0286 × 10^−6 | 1.9831 × 10^−6 | 0.030294 |
| | **0.7** | **0** | **2.2684 × 10^−125** | **1.7343 × 10^−124** | **5.5562 × 10^−125** | **0.026011** |
| | 0.8 | 3.3767 × 10^−35 | 1.1359 × 10^−33 | 8.1204 × 10^−33 | 2.4834 × 10^−33 | 0.028445 |
| | 0.9 | 1.6341 × 10^−37 | 7.7333 × 10^−25 | 6.0758 × 10^−24 | 1.8921 × 10^−24 | 0.021513 |
| | 1 | 6.9763 × 10^−7 | 2.2024 × 10^−6 | 5.8204 × 10^−6 | 1.6029 × 10^−6 | 0.028347 |
| F2 | 0.1 | −418.33 | −415.76 | −405.75 | 3.7741 | 0.018862 |
| | 0.2 | −418.33 | −412.94 | −409.25 | 4.2267 | 0.046129 |
| | 0.3 | −418.33 | −417.56 | −416.14 | 0.7413 | 0.01932 |
| | 0.4 | −418.33 | −413.42 | −410.14 | 4.2303 | 0.0077923 |
| | 0.5 | −418.33 | −415.87 | −410.13 | 3.9568 | 0.013134 |
| | 0.6 | −418.33 | −417.07 | −410.14 | 2.4965 | 0.02676 |
| | **0.7** | **−418.33** | **−418.33** | **−418.33** | **9.5296 × 10^−5** | **0.018653** |
| | 0.8 | −418.33 | −418.33 | −418.32 | 0.0044498 | 0.066814 |
| | 0.9 | −418.33 | −417.8 | −416.14 | 0.76649 | 0.01952 |
| | 1 | −418.33 | −416.96 | −410.14 | 2.4589 | 0.026582 |
| F3 | 0.1 | −2.6097 | −2.2255 | −1.8999 | 0.24426 | 0.009646 |
| | 0.2 | −2.7176 | −2.7155 | −2.7092 | 0.0024518 | 0.046611 |
| | 0.3 | −2.718 | −2.7156 | −2.7128 | 0.0017583 | 0.046178 |
| | 0.4 | −2.7179 | −2.7156 | −2.7132 | 0.001481 | 0.046432 |
| | 0.5 | −2.7183 | −2.7183 | −2.7183 | 7.0086 × 10^−7 | 0.014749 |
| | 0.6 | −2.7183 | −2.7183 | −2.7183 | 2.2083 × 10^−7 | 0.021694 |
| | **0.7** | **−2.7183** | **−2.7183** | **−2.7183** | **4.6811 × 10^−16** | **0.028963** |
| | 0.8 | −2.7183 | −2.7183 | −2.7183 | 4.6811 × 10^−16 | 0.02057 |
| | 0.9 | −2.7183 | −2.7183 | −2.7183 | 3.7855 × 10^−12 | 0.020766 |
| | 1 | −2.7177 | −2.7152 | −2.707 | 0.0032081 | 0.046641 |
| F4 | 0.1 | 4.7243 × 10^−7 | 0.00011708 | 0.00070876 | 0.00021327 | 0.046361 |
| | 0.2 | 0 | 2.2273 | 10.945 | 4.1622 | 0.020712 |
| | 0.3 | 4.3039 × 10^−8 | 3.8361 | 7.3024 | 2.2435 | 0.014527 |
| | 0.4 | 1.4924 × 10^−6 | 0.11326 | 1.0379 | 0.32613 | 0.045362 |
| | 0.5 | 0.067671 | 0.00016892 | 0.0009582 | 0.00029693 | 0.021115 |
| | 0.6 | 0 | 6.0606 × 10^−5 | 0.00028943 | 9.3136 × 10^−5 | 0.0091597 |
| | **0.7** | **0** | **0** | **0** | **0** | **0.014531** |
| | 0.8 | 0 | 1.6538 × 10^−6 | 0.00013503 | 2.0429 × 10^−6 | 0.028017 |
| | 0.9 | 6.8531 × 10^−6 | 0.00020814 | 0.0010683 | 0.00031516 | 0.019713 |
| | 1 | 1.4704 × 10^−6 | 0.00028076 | 0.0016492 | 0.00054483 | 0.067671 |
| F5 | 0.1 | 0.43399 | 0.71257 | 0.95111 | 0.19868 | 0.00983 |
| | 0.2 | 0.015268 | 0.082615 | 0.17984 | 0.056173 | 0.014997 |
| | 0.3 | 0.0038319 | 0.096991 | 0.27921 | 0.092249 | 0.046197 |
| | 0.4 | 0.002467 | 0.024119 | 0.081303 | 0.027423 | 0.021265 |
| | 0.5 | 0.0024663 | 0.0068887 | 0.024551 | 0.0077457 | 0.028851 |
| | 0.6 | 0.0024696 | 0.0069524 | 0.036808 | 0.01062 | 0.068428 |
| | **0.7** | **0.0024663** | **0.0044396** | **0.0098544** | **0.0027945** | **0.02841** |
| | 0.8 | 0.0024663 | 0.0056693 | 0.014762 | 0.0040234 | 0.040013 |
| | 0.9 | 0.0024696 | 0.013298 | 0.041803 | 0.013017 | 0.0207 |
| | 1 | 0.0056531 | 0.072293 | 0.26635 | 0.077289 | 0.020923 |
| F6 | 0.1 | 12,880 | 13,971 | 14,647 | 596.08 | 0.20069 |
| | 0.2 | 8749.5 | 10,715 | 11,564 | 845.16 | 0.11394 |
| | 0.3 | 9231.3 | 11,289 | 12,765 | 986.73 | 0.025697 |
| | 0.4 | 10,053 | 12,516 | 17,438 | 1965.6 | 0.048951 |
| | 0.5 | 7273.4 | 8865.6 | 10,498 | 1089.2 | 0.043216 |
| | 0.6 | 5447.8 | 9140.3 | 11,694 | 1888.9 | 0.041474 |
| | **0.7** | **2554.7** | **8678.2** | **9758.6** | **927.63** | **0.032363** |
| | 0.8 | 7341.4 | 9150.9 | 9776 | 768.61 | 0.04254 |
| | 0.9 | 9398.3 | 11,396 | 13,030 | 1183.2 | 0.025367 |
| | 1 | 8641.8 | 9979.2 | 11,260 | 831.14 | 0.11218 |

Appendix B

The following table presents the test results on the CEC2017 function suite. The reported data comprise the mean, standard deviation, p-value, Cliff's delta, and rank of the optimal solutions obtained by the different optimization algorithms.
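Cliff's delta, the effect-size measure reported below, compares two result samples pairwise: it is the proportion of pairs where the first sample exceeds the second, minus the proportion where it is smaller. A minimal sketch (sign convention assumed here: negative values mean the first sample tends to be smaller, i.e., better for minimization results):

```python
def cliffs_delta(a, b):
    """Cliff's delta effect size: (#(a_i > b_j) - #(a_i < b_j)) / (len(a) * len(b)).
    Ranges from -1 (a entirely below b) to +1 (a entirely above b)."""
    gt = sum(1 for x in a for y in b if x > y)
    lt = sum(1 for x in a for y in b if x < y)
    return (gt - lt) / (len(a) * len(b))

# Example: every BLCSO-style result beats (is below) every competitor result,
# giving the maximal effect size of -1.0 (values here are illustrative only).
blcso = [3960.1, 4204.3, 3900.0]
other = [11150.0, 12000.0, 11500.0]
print(cliffs_delta(blcso, other))  # -1.0
```

Paired with the p-values of a nonparametric significance test, this distinguishes differences that are statistically detectable from those that are also practically large.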
Table A3. Results for the CEC2017 function suite.
| Function | Indicator | PSO | GWO | HHO | GA | SSA | CSO | BLCSO |
|---|---|---|---|---|---|---|---|---|
| F1 | Ave | 2.6948 × 10^6 | 11,150 | 1.1609 × 10^7 | 7.453 × 10^9 | 1.2153 × 10^9 | 8.3445 × 10^7 | 3960.1 |
| | Std | 2.7113 × 10^6 | 5486.7 | 2.438 × 10^6 | 5.8434 × 10^9 | 8.4496 × 10^8 | 4.046 × 10^7 | 4204.3 |
| | p-value | 2 × 10^−6 | 2 × 10^−6 | 2 × 10^−6 | 2 × 10^−6 | 2 × 10^−6 | 2 × 10^−6 | -- |
| | Cliff's delta | −1.000 | −1.000 | −1.000 | −1.000 | −1.000 | −1.000 | -- |
| | Rank | 3 | 2 | 4 | 7 | 6 | 5 | 1 |
| F2 | Ave | 12,477 | 52,701 | 25,599 | 1.2014 × 10^5 | 2.372 × 10^5 | 41,642 | 12,879 |
| | Std | 6581.8 | 12,254 | 8994.4 | 40,918 | 52,094 | 4050.7 | 4907.9 |
| | p-value | 0.7343 | 2 × 10^−6 | 9 × 10^−6 | 2 × 10^−6 | 1.7 × 10^−6 | 0.0064 | -- |
| | Cliff's delta | −0.009 | −0.996 | −0.824 | −1.000 | −1.000 | −0.436 | -- |
| | Rank | 1 | 5 | 3 | 6 | 7 | 4 | 2 |
| F3 | Ave | 540.11 | 570.99 | 575.94 | 982.32 | 499.08 | 663.63 | 476.68 |
| | Std | 12.776 | 51.759 | 63.045 | 384.98 | 17.736 | 42.514 | 9.8982 |
| | p-value | 1.7 × 10^−6 | 2.9 × 10^−6 | 5.2 × 10^−6 | 1.2 × 10^−5 | 2.8 × 10^−5 | 1.7 × 10^−6 | -- |
| | Cliff's delta | −1.000 | −0.933 | −0.924 | −0.836 | −0.762 | −1.000 | -- |
| | Rank | 3 | 4 | 5 | 7 | 2 | 6 | 1 |
| F4 | Ave | 655.13 | 634.42 | 736.12 | 720.25 | 755.18 | 797.68 | 629.43 |
| | Std | 29.004 | 63.974 | 43.668 | 58.041 | 59.344 | 87.691 | 21.356 |
| | p-value | 0.00039 | 0.44 | 1.9 × 10^−6 | 5.8 × 10^−6 | 2.4 × 10^−6 | 2.6 × 10^−6 | -- |
| | Cliff's delta | −0.567 | −0.027 | −0.958 | −0.893 | −0.936 | −0.933 | -- |
| | Rank | 3 | 2 | 5 | 4 | 6 | 7 | 1 |
| F5 | Ave | 645.55 | 604.44 | 662.85 | 623.39 | 647.5 | 674.67 | 608.1 |
| | Std | 6.1205 | 1.1778 | 2.9074 | 11.993 | 11.105 | 11.994 | 1.6554 |
| | p-value | 1.7 × 10^−6 | 1.7 × 10^−6 | 1.7 × 10^−6 | 1.2 × 10^−5 | 1.7 × 10^−6 | 1.7 × 10^−6 | -- |
| | Cliff's delta | −1.000 | 0.922 | −1.000 | −0.840 | −1.000 | −1.000 | -- |
| | Rank | 4 | 1 | 6 | 3 | 5 | 7 | 2 |
| F6 | Ave | 941.36 | 907.42 | 1277.7 | 983 | 1228.3 | 1291.7 | 878.84 |
| | Std | 39.698 | 32.512 | 113.08 | 98.178 | 95.887 | 140.18 | 51.132 |
| | p-value | 3.4 × 10^−5 | 0.0032 | 1.7 × 10^−6 | 5.3 × 10^−5 | 1.7 × 10^−6 | 1.7 × 10^−6 | -- |
| | Cliff's delta | −0.722 | −0.462 | −1.000 | −0.729 | −1.000 | −0.984 | -- |
| | Rank | 3 | 2 | 6 | 4 | 5 | 7 | 1 |
| F7 | Ave | 922.63 | 976.41 | 968.35 | 975.11 | 1044.4 | 896.27 | 872.42 |
| | Std | 20.547 | 12.266 | 26.826 | 42.983 | 50.973 | 26.261 | 19.587 |
| | p-value | 2.6 × 10^−6 | 1.7 × 10^−6 | 1.7 × 10^−6 | 1.9 × 10^−6 | 1.7 × 10^−6 | 0.00031 | -- |
| | Cliff's delta | −0.931 | −1.000 | −0.996 | −0.951 | −0.998 | −0.571 | -- |
| | Rank | 3 | 6 | 4 | 5 | 7 | 2 | 1 |
| F8 | Ave | 7298.5 | 5887.3 | 5318.2 | 9601.8 | 3866.5 | 2155.2 | 2049 |
| | Std | 942.08 | 1833.7 | 178.37 | 2273.2 | 1806.6 | 969.72 | 1478.8 |
| | p-value | 1.7 × 10^−6 | 2.9 × 10^−6 | 1.7 × 10^−6 | 1.7 × 10^−6 | 0.00021 | 0.29 | -- |
| | Cliff's delta | −1.000 | −0.904 | −1.000 | −0.996 | −0.611 | −0.124 | -- |
| | Rank | 6 | 5 | 4 | 7 | 3 | 2 | 1 |
| F9 | Ave | 5757.5 | 6756 | 5254 | 5285.1 | 4975 | 6017.1 | 3945 |
| | Std | 494.77 | 291.87 | 709.68 | 478.02 | 379.36 | 684.34 | 252.38 |
| | p-value | 1.7 × 10^−6 | 1.7 × 10^−6 | 2.9 × 10^−6 | 1.9 × 10^−6 | 1.9 × 10^−6 | 1.7 × 10^−6 | -- |
| | Cliff's delta | −1.000 | −1.000 | −0.924 | −0.976 | −0.971 | −0.991 | -- |
| | Rank | 5 | 7 | 3 | 4 | 2 | 6 | 1 |
| F10 | Ave | 1331.2 | 1390.3 | 1322.6 | 1758.6 | 1270.3 | 5157.5 | 1248.9 |
| | Std | 44.563 | 48.162 | 42.595 | 269.39 | 65.073 | 1132.4 | 42.233 |
| | p-value | 4.7 × 10^−6 | 1.7 × 10^−6 | 7 × 10^−6 | 2.6 × 10^−6 | 0.043 | 1.7 × 10^−6 | -- |
| | Cliff's delta | −0.851 | −0.973 | −0.811 | −0.933 | −0.231 | −1.000 | -- |
| | Rank | 4 | 5 | 3 | 6 | 2 | 7 | 1 |
| F11 | Ave | 4.0014 × 10^6 | 1.1835 × 10^8 | 3.035 × 10^7 | 4.2306 × 10^7 | 2.7555 × 10^6 | 2.1966 × 10^8 | 2.0016 × 10^5 |
| | Std | 1.7018 × 10^6 | 1.018 × 10^8 | 2.0077 × 10^7 | 7.0855 × 10^7 | 1.5392 × 10^6 | 1.0772 × 10^8 | 2.2045 × 10^5 |
| | p-value | 1.9 × 10^−6 | 1.6 × 10^−5 | 5.8 × 10^−6 | 0.0044 | 3.5 × 10^−6 | 1.9 × 10^−6 | -- |
| | Cliff's delta | −0.933 | −0.800 | −0.933 | −0.467 | −0.933 | −0.933 | -- |
| | Rank | 3 | 6 | 4 | 5 | 2 | 7 | 1 |
| F12 | Ave | 45,729 | 1.2644 × 10^5 | 4.5846 × 10^5 | 9.2591 × 10^5 | 21,206 | 5.3441 × 10^5 | 28,735 |
| | Std | 31,092 | 37,090 | 1.3208 × 10^5 | 1.9676 × 10^6 | 11,899 | 4.6919 × 10^5 | 12,156 |
| | p-value | 0.0036 | 1.9 × 10^−6 | 1.7 × 10^−6 | 0.024 | 0.098 | 3.4 × 10^−5 | -- |
| | Cliff's delta | −0.389 | −0.964 | −1.000 | −0.322 | 0.280 | −0.800 | -- |
| | Rank | 3 | 4 | 5 | 7 | 1 | 6 | 2 |
| F13 | Ave | 86,655 | 2.4615 × 10^5 | 1.3344 × 10^5 | 30,987 | 5773.6 | 1.1669 × 10^6 | 3216.9 |
| | Std | 55,422 | 2.1085 × 10^5 | 1.2696 × 10^5 | 27,317 | 3860.4 | 7.3443 × 10^5 | 3471.8 |
| | p-value | 5.8 × 10^−6 | 1.6 × 10^−5 | 4.1 × 10^−5 | 3.7 × 10^−5 | 0.0024 | 4.3 × 10^−6 | -- |
| | Cliff's delta | −0.933 | −0.800 | −0.800 | −0.756 | −0.429 | −0.933 | -- |
| | Rank | 4 | 6 | 5 | 3 | 2 | 7 | 1 |
| F14 | Ave | 18,292 | 50,617 | 35,615 | 1.0176 × 10^6 | 7945.9 | 62,452 | 7459.3 |
| | Std | 10,716 | 27,787 | 12,378 | 1.9419 × 10^6 | 3798.7 | 26,804 | 4263.2 |
| | p-value | 6.9 × 10^−5 | 5.8 × 10^−6 | 1.9 × 10^−6 | 0.01 | 0.25 | 2.1 × 10^−6 | -- |
| | Cliff's delta | −0.696 | −0.924 | −0.940 | −0.400 | −0.131 | −0.933 | -- |
| | Rank | 3 | 5 | 4 | 7 | 2 | 6 | 1 |
| F15 | Ave | 2666.4 | 3242.6 | 3088.4 | 3142.2 | 2557 | 3533.5 | 2273.4 |
| | Std | 247.54 | 79.413 | 77.893 | 89.891 | 34.766 | 56.391 | 52.23 |
| | p-value | 5.8 × 10^−6 | 2.4 × 10^−6 | 2.4 × 10^−6 | 1.9 × 10^−6 | 3.9 × 10^−6 | 2.6 × 10^−6 | -- |
| | Cliff's delta | −0.873 | −0.936 | −0.940 | −0.940 | −0.871 | −0.933 | -- |
| | Rank | 3 | 6 | 4 | 5 | 2 | 7 | 1 |
| F16 | Ave | 2165.7 | 2077.9 | 2395.5 | 2465 | 2555.2 | 2512.8 | 2065.5 |
| | Std | 261.02 | 59.968 | 299.12 | 248.41 | 249.73 | 265.94 | 103.28 |
| | p-value | 0.02 | 0.18 | 3.1 × 10^−5 | 5.2 × 10^−6 | 2.4 × 10^−6 | 3.9 × 10^−6 | -- |
| | Cliff's delta | −0.271 | −0.167 | −0.753 | −0.884 | −0.927 | −0.911 | -- |
| | Rank | 3 | 2 | 4 | 5 | 7 | 6 | 1 |
| F17 | Ave | 2.3416 × 10^6 | 2.0959 × 10^6 | 6.1878 × 10^5 | 7.125 × 10^5 | 7.069 × 10^6 | 5.9273 × 10^5 | 82,689 |
| | Std | 5.5303 × 10^5 | 1.9468 × 10^6 | 2.919 × 10^5 | 7.4786 × 10^5 | 5.4647 × 10^6 | 5.3013 × 10^5 | 18,258 |
| | p-value | 1.7 × 10^−6 | 4.1 × 10^−5 | 2.6 × 10^−6 | 0.00024 | 1.2 × 10^−5 | 4.4 × 10^−5 | -- |
| | Cliff's delta | −1.000 | −0.800 | −0.933 | −0.549 | −0.800 | −0.747 | -- |
| | Rank | 6 | 5 | 3 | 4 | 7 | 2 | 1 |
| F18 | Ave | 28,290 | 2.4408 × 10^5 | 3.1424 × 10^5 | 3.8196 × 10^6 | 7.4957 × 10^6 | 9.436 × 10^5 | 4762.1 |
| | Std | 19,780 | 53,198 | 1.8894 × 10^5 | 4.6224 × 10^6 | 4.5621 × 10^6 | 7.145 × 10^5 | 4389.2 |
| | p-value | 2.2 × 10^−5 | 1.7 × 10^−6 | 3.9 × 10^−6 | 0.00033 | 3.9 × 10^−6 | 1.2 × 10^−5 | -- |
| | Cliff's delta | −0.809 | −1.000 | −0.933 | −0.533 | −0.933 | −0.831 | -- |
| | Rank | 2 | 3 | 4 | 6 | 7 | 5 | 1 |
| F19 | Ave | 2526.5 | 2386.2 | 2767 | 2449.3 | 2620 | 2694.3 | 2250.3 |
| | Std | 231.6 | 148.13 | 245.72 | 34.242 | 38.293 | 13.55 | 51.841 |
| | p-value | 2 × 10^−5 | 0.00011 | 1.9 × 10^−6 | 0.00021 | 5.2 × 10^−6 | 7.7 × 10^−6 | -- |
| | Cliff's delta | −0.811 | −0.642 | −0.933 | −0.611 | −0.916 | −0.893 | -- |
| | Rank | 4 | 2 | 7 | 3 | 5 | 6 | 1 |
| F20 | Ave | 2453 | 2488.3 | 2553.5 | 2460.5 | 2500.8 | 2550.4 | 2396.9 |
| | Std | 55.299 | 12.039 | 39.853 | 40.851 | 45.704 | 84.677 | 42.472 |
| | p-value | 0.00017 | 1.7 × 10^−6 | 1.7 × 10^−6 | 2.2 × 10^−5 | 2.9 × 10^−6 | 2.9 × 10^−6 | -- |
| | Cliff's delta | −0.624 | −0.987 | −0.996 | −0.769 | −0.922 | −0.909 | -- |
| | Rank | 2 | 4 | 7 | 3 | 5 | 6 | 1 |
| F21 | Ave | 5069.1 | 4964.9 | 7475.4 | 5617.3 | 3102.5 | 7781.6 | 3428.9 |
| | Std | 2418.5 | 626.63 | 358.73 | 2039.8 | 1789.3 | 790.32 | 1321.3 |
| | p-value | 0.0015 | 1.1 × 10^−5 | 1.7 × 10^−6 | 7.5 × 10^−5 | 0.86 | 1.7 × 10^−6 | -- |
| | Cliff's delta | −0.487 | −0.764 | −1.000 | −0.684 | 0.087 | −0.996 | -- |
| | Rank | 4 | 3 | 6 | 5 | 1 | 7 | 2 |
| F22 | Ave | 2980.1 | 2829.5 | 3147.2 | 2796.6 | 3042.3 | 3067.9 | 2770.3 |
| | Std | 90.326 | 17.923 | 75.85 | 41.128 | 135.32 | 104.51 | 29.821 |
| | p-value | 1.9 × 10^−6 | 2.1 × 10^−6 | 1.7 × 10^−6 | 0.0024 | 2.1 × 10^−6 | 1.7 × 10^−6 | -- |
| | Cliff's delta | −0.942 | −0.929 | −1.000 | −0.440 | −0.933 | −0.984 | -- |
| | Rank | 4 | 3 | 7 | 2 | 5 | 6 | 1 |
| F23 | Ave | 3093.3 | 3029.3 | 3401.1 | 2971.5 | 3116.7 | 3230.1 | 2920.5 |
| | Std | 81.288 | 15.503 | 479.465 | 24.971 | 103.66 | 99.543 | 83.648 |
| | p-value | 2.9 × 10^−6 | 4.3 × 10^−6 | 1.7 × 10^−6 | 0.00066 | 3.9 × 10^−6 | 1.7 × 10^−6 | -- |
| | Cliff's delta | −0.882 | −0.858 | −1.000 | −0.549 | −0.878 | −0.984 | -- |
| | Rank | 4 | 3 | 7 | 2 | 5 | 6 | 1 |
| F24 | Ave | 2940 | 2896.8 | 2936.4 | 3169.4 | 2894.4 | 3062.9 | 2995.9 |
| | Std | 18.369 | 2.2586 | 28.62 | 209.55 | 28.329 | 550.95 | 114.57 |
| | p-value | 1.7 × 10^−6 | 1.7 × 10^−6 | 2.4 × 10^−6 | 0.00028 | 1.7 × 10^−6 | 1.2 × 10^−5 | -- |
| | Cliff's delta | 0.971 | 1.000 | 0.853 | −0.558 | 1.000 | −0.842 | -- |
| | Rank | 4 | 1 | 3 | 7 | 2 | 6 | 5 |
| F25 | Ave | 6237.9 | 5502.4 | 6985 | 5623.7 | 5898.3 | 7441.6 | 4513.5 |
| | Std | 642.06 | 142.64 | 2176.7 | 608.53 | 1951.4 | 1689.4 | 155.07 |
| | p-value | 1.9 × 10^−6 | 1.7 × 10^−6 | 2.6 × 10^−5 | 2.9 × 10^−6 | 0.00096 | 3.2 × 10^−6 | -- |
| | Cliff's delta | −0.971 | −1.000 | −0.800 | −0.929 | −0.476 | −0.933 | -- |
| | Rank | 5 | 2 | 6 | 3 | 4 | 7 | 1 |
| F26 | Ave | 3272.9 | 3228.7 | 3476.2 | 3247.1 | 3278.1 | 3434 | 3235.8 |
| | Std | 18.59 | 4.3929 | 104.85 | 15.1 | 10.192 | 80.496 | 14.667 |
| | p-value | 2.9 × 10^−6 | 0.032 | 1.9 × 10^−6 | 0.0015 | 1.7 × 10^−6 | 1.9 × 10^−6 | -- |
| | Cliff's delta | −0.896 | 0.316 | −0.933 | −0.471 | −0.996 | −0.940 | -- |
| | Rank | 4 | 1 | 7 | 3 | 5 | 6 | 2 |
| F27 | Ave | 3297.2 | 3299.8 | 3310 | 4510.5 | 3487.3 | 3370.7 | 3243.9 |
| | Std | 30.382 | 11.639 | 40.678 | 476.76 | 72.122 | 36.426 | 24.532 |
| | p-value | 4.7 × 10^−6 | 1.7 × 10^−6 | 5.2 × 10^−6 | 1.9 × 10^−6 | 1.7 × 10^−6 | 1.7 × 10^−6 | -- |
| | Cliff's delta | −0.860 | −0.982 | −0.878 | −0.962 | −1.000 | −0.996 | -- |
| | Rank | 2 | 3 | 4 | 7 | 6 | 5 | 1 |
| F28 | Ave | 4228.1 | 4065.4 | 4621.6 | 4027.2 | 3943.7 | 5157 | 3837.9 |
| | Std | 307.46 | 141.38 | 92.231 | 271.55 | 173.57 | 502.42 | 187.59 |
| | p-value | 2.8 × 10^−5 | 3.4 × 10^−5 | 1.7 × 10^−6 | 0.0016 | 0.0057 | 1.9 × 10^−6 | -- |
| | Cliff's delta | −0.756 | −0.722 | −1.000 | −0.482 | −0.384 | −0.964 | -- |
| | Rank | 5 | 4 | 6 | 3 | 2 | 7 | 1 |
| F29 | Ave | 1.7902 × 10^5 | 1.2564 × 10^5 | 2.1557 × 10^6 | 3.256 × 10^5 | 8.9325 × 10^6 | 1.6245 × 10^7 | 16,244 |
| | Std | 1.5375 × 10^5 | 55,218 | 8.6316 × 10^5 | 3.3875 × 10^5 | 9.4394 × 10^6 | 6.1396 × 10^6 | 8111.9 |
| | p-value | 3.4 × 10^−5 | 2.4 × 10^−6 | 1.9 × 10^−6 | 9.7 × 10^−5 | 6.3 × 10^−5 | 1.9 × 10^−6 | -- |
| | Cliff's delta | −0.800 | −0.933 | −0.933 | −0.667 | −0.667 | −0.933 | -- |
| | Rank | 3 | 2 | 5 | 4 | 6 | 7 | 1 |

References

  1. Luo, C.; Zhu, S.P.; Keshtegar, B.; Macek, W.; Branco, R.; Meng, D. Active Kriging-based conjugate first-order reliability method for highly efficient structural reliability analysis using resample strategy. Comput. Methods Appl. Mech. Eng. 2024, 423, 116863. [Google Scholar] [CrossRef]
  2. Meng, D.; Yang, S.; Yang, H.; De Jesus, A.M.; Correia, J.; Zhu, S.P. Intelligent-inspired framework for fatigue reliability evaluation of offshore wind turbine support structures under hybrid uncertainty. Ocean Eng. 2024, 307, 118213. [Google Scholar] [CrossRef]
  3. Meng, D.; Yang, H.; Yang, S.; Zhang, Y.; De Jesus, A.M.; Correia, J.; Zhu, S.P. Kriging-assisted hybrid reliability design and optimization of offshore wind turbine support structure based on a portfolio allocation strategy. Ocean Eng. 2024, 295, 116842. [Google Scholar] [CrossRef]
  4. Wang, B.; Zhu, Q. Stability analysis of discrete-time semi-Markov jump linear systems. IEEE Trans. Autom. Control 2020, 65, 5415–5421. [Google Scholar] [CrossRef]
  5. Yang, X.; Zhu, Q. Stabilization of stochastic retarded systems based on sampled-data feedback control. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 5895–5904. [Google Scholar] [CrossRef]
  6. Zhang, Z.; Liu, H.; Wu, T.; Xu, J.; Jiang, C. A novel reliability-based design optimization method through instance-based transfer learning. Comput. Methods Appl. Mech. Eng. 2024, 432, 117388. [Google Scholar] [CrossRef]
  7. Meng, D.; Yang, S.; De Jesus, A.M.; Fazeres-Ferradosa, T.; Zhu, S.P. A novel hybrid adaptive Kriging and water cycle algorithm for reliability-based design and optimization strategy: Application in offshore wind turbine monopile. Comput. Methods Appl. Mech. Eng. 2023, 412, 116083. [Google Scholar] [CrossRef]
  8. Meng, D.; Yang, S.; De Jesus, A.M.; Zhu, S.P. A novel Kriging-model-assisted reliability-based multidisciplinary design optimization strategy and its application in the offshore wind turbine tower. Renew. Energy 2023, 203, 407–420. [Google Scholar] [CrossRef]
  9. Lai, X.; Chen, Y.; Zhang, Y.; Wang, C. Fast solution of reliability-based robust design optimization by reducing the evaluation number for the performance functions. Int. J. Struct. Integr. 2023, 14, 946–965. [Google Scholar] [CrossRef]
  10. Yang, S.; Guo, C.; Meng, D.; Guo, Y.; Guo, Y.; Pan, L.; Zhu, S.P. MECSBO: Multi-strategy enhanced circulatory system based optimisation algorithm for global optimisation and reliability-based design optimisation problems. IET Collab. Intell. Manuf. 2024, 6, e12097. [Google Scholar] [CrossRef]
  11. Huang, Y.; Zhu, Q. pth Moment Exponential Stability of Highly Nonlinear Neutral Hybrid Stochastic Delayed Systems with Impulsive Effect; IEEE: New York, NY, USA, 2025. [Google Scholar]
  12. Meng, Z.; Li, C.; Hao, P. Unified reliability-based design optimization with probabilistic, uncertain-but-bounded and fuzzy variables. Comput. Methods Appl. Mech. Eng. 2023, 407, 115925. [Google Scholar] [CrossRef]
  13. van Mierlo, C.; Persoons, A.; Faes, M.G.; Moens, D. Robust design optimisation under lack-of-knowledge uncertainty. Comput. Struct. 2023, 275, 106910. [Google Scholar] [CrossRef]
  14. Farahmand-Tabar, S.; Shirgir, S. Positron-enabled atomic orbital search algorithm for improved reliability-based design optimization. In Handbook of Formal Optimization; Springer Nature: Singapore, 2024; pp. 389–418. [Google Scholar]
  15. Ebrahimi, B.; Bataleblu, A.A. Intelligent reliability-based design optimization: Past and future research trends. In Developments in Reliability Engineering; Elsevier: Amsterdam, The Netherlands, 2024; pp. 787–826. [Google Scholar]
  16. Hu, W.; Cheng, S.; Yan, J.; Cheng, J.; Peng, X.; Cho, H.; Lee, I. Reliability-based design optimization: A state-of-the-art review of its methodologies, applications, and challenges. Struct. Multidiscip. Optim. 2024, 67, 168. [Google Scholar] [CrossRef]
  17. Yang, S.; Meng, D.; Guo, Y.; Nie, P.; de Jesus, A.M. A reliability-based design and optimization strategy using a novel MPP searching method for maritime engineering structures. Int. J. Struct. Integr. 2023, 14, 809–826. [Google Scholar] [CrossRef]
  18. Yang, S.; Meng, D.; Wang, H.; Yang, C. A novel learning function for adaptive surrogate-model-based reliability evaluation. Philos. Trans. R. Soc. A 2024, 382, 20220395. [Google Scholar] [CrossRef]
  19. Abualigah, L.; Sheikhan, A.; Ikotun, A.M.; Zitar, R.A.; Alsoud, A.R.; Al-Shourbaji, I.; Jia, H. Particle Swarm Optimization Algorithm: Review and Applications; Elsevier: Amsterdam, The Netherlands, 2024. [Google Scholar]
  20. Alhijawi, B.; Awajan, A. Genetic algorithms: Theory, genetic operators, solutions, and applications. Evol. Intell. 2024, 17, 1245–1256. [Google Scholar] [CrossRef]
  21. Zhang, Y.Q.; Wang, J.H.; Wang, Y.; Jia, Z.C.; Sun, Q.; Pei, Q.Y.; Wu, D. Intelligent planning of fire evacuation routes in buildings based on improved adaptive ant colony algorithm. Comput. Ind. Eng. 2024, 194, 110335. [Google Scholar] [CrossRef]
  22. He, K.; Zhang, Y.; Wang, Y.K.; Zhou, R.H.; Zhang, H.Z. EABOA: Enhanced adaptive butterfly optimization algorithm for numerical optimization and engineering design problems. Alex. Eng. J. 2024, 87, 543–573. [Google Scholar] [CrossRef]
  23. Meng, D.; Zhu, S.-P. Multidisciplinary Design Optimization of Complex Structures Under Uncertainty, 1st ed.; CRC Press: Boca Raton, FL, USA, 2024; ISBN 9781003464792. [Google Scholar]
  24. Ghasemi, M.; Zare, M.; Trojovský, P.; Rao, R.V.; Trojovská, E.; Kandasamy, V. Optimization based on the smart behavior of plants with its engineering applications: Ivy algorithm. Knowl.-Based Syst. 2024, 295, 111850. [Google Scholar] [CrossRef]
  25. Zhu, F.; Li, G.; Tang, H.; Li, Y.; Lv, X.; Wang, X. Dung beetle optimization algorithm based on quantum computing and multi-strategy fusion for solving engineering problems. Expert Syst. Appl. 2024, 236, 121219. [Google Scholar] [CrossRef]
  26. Wang, J.; Wang, W.C.; Hu, X.X.; Qiu, L.; Zang, H.F. Black-winged kite algorithm: A nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artif. Intell. Rev. 2024, 57, 98. [Google Scholar] [CrossRef]
  27. Meng, D.; Yang, S.; Zhang, Y.; Zhu, S.-P. Structural reliability analysis and uncertainties-based collaborative design and optimization of turbine blades using surrogate model. Fatigue Fract. Eng. Mater. Struct. 2019, 42, 1219–1227. [Google Scholar] [CrossRef]
  28. Meng, Z.; Yıldız, B.S.; Li, G.; Zhong, C.; Mirjalili, S.; Yildiz, A.R. Application of state-of-the-art multiobjective metaheuristic algorithms in reliability-based design optimization: A comparative study. Struct. Multidiscip. Optim. 2023, 66, 191. [Google Scholar] [CrossRef]
  29. Hamza, F.; Ferhat, D.; Abderazek, H.; Dahane, M. A new efficient hybrid approach for reliability-based design optimization problems. Eng. Comput. 2022, 38, 1953–1976. [Google Scholar] [CrossRef]
  30. Zadeh, P.M.; Mohagheghi, M. An efficient Bi-level hybrid multi-objective reliability-based design optimization of composite structures. Compos. Struct. 2022, 296, 115862. [Google Scholar] [CrossRef]
  31. Chun, J. Reliability-based design optimization of structures using complex-step approximation with sensitivity analysis. Appl. Sci. 2021, 11, 4708. [Google Scholar] [CrossRef]
  32. Lai, X.; Huang, J.; Zhang, Y.; Wang, C.; Zhang, X. A general methodology for reliability-based robust design optimization of computation-intensive engineering problems. J. Comput. Des. Eng. 2022, 9, 2151–2169. [Google Scholar] [CrossRef]
  33. Meng, X.J.; Zhang, L.X.; Pan, Y.; Liu, Z.M. Reliability-based multidisciplinary concurrent design optimization method for complex engineering systems. Eng. Optim. 2022, 54, 1374–1394. [Google Scholar] [CrossRef]
  34. Li, X.; Zhu, H.; Chen, Z.; Ming, W.; Cao, Y.; He, W.; Ma, J. Limit state Kriging modeling for reliability-based design optimization through classification uncertainty quantification. Reliab. Eng. Syst. Saf. 2022, 224, 108539. [Google Scholar] [CrossRef]
  35. Meng, D.; Li, Y.; He, C.; Guo, J.; Lv, Z.; Wu, P. Multidisciplinary design for structural integrity using a collaborative optimization method based on adaptive surrogate modelling. Mater. Des. 2021, 206, 109789. [Google Scholar] [CrossRef]
  36. Yu, C.; Lv, X.; Huang, D.; Jiang, D. Reliability-based design optimization of offshore wind turbine support structures using RBF surrogate model. Front. Struct. Civ. Eng. 2023, 17, 1086–1099. [Google Scholar] [CrossRef]
  37. Allahvirdizadeh, R.; Andersson, A.; Karoumi, R. Improved dynamic design method of ballasted high-speed railway bridges using surrogate-assisted reliability-based design optimization of dependent variables. Reliab. Eng. Syst. Saf. 2023, 238, 109406. [Google Scholar] [CrossRef]
  38. Fu, C.; Liu, J.; Xu, W.; Yu, H. A reliability based multidisciplinary design optimization method with multi-source uncertainties. J. Phys. Conf. Ser. 2020, 1654, 012043. [Google Scholar] [CrossRef]
  39. Ni, P.; Li, J.; Hao, H.; Zhou, H. Reliability based design optimization of bridges considering bridge-vehicle interaction by Kriging surrogate model. Eng. Struct. 2021, 246, 112989. [Google Scholar] [CrossRef]
  40. Su, X.; Hong, Z.; Xu, Z.; Qian, H. An improved CREAM model based on Deng entropy and evidence distance. Nucl. Eng. Technol. 2025, 57, 103485. [Google Scholar] [CrossRef]
  41. Huang, J.; Ai, Q. Key vulnerability parameters for steel pipe pile-supported wharves considering the uncertainties in structural design. Int. J. Ocean Syst. Manag. 2025, 2, 35–51. [Google Scholar] [CrossRef]
  42. Correia, J.A.; Haselibozchaloee, D.; Zhu, S.P. A review on fatigue design of offshore structures. Int. J. Ocean Syst. Manag. 2025, 2, 1–18. [Google Scholar] [CrossRef]
  43. Su, X.; Zhong, J.; Hong, Z.; Qian, H.; Pelusi, D. A novel belief entropy and its application in cooperative situational awareness. Expert Syst. Appl. 2025, 286, 128027. [Google Scholar] [CrossRef]
  44. Yang, S.; Chen, Y. Modelling and analysis of offshore wind turbine gearbox under multi-field coupling. Int. J. Ocean. Syst. Manag. 2025, 2, 52–66. [Google Scholar] [CrossRef]
  45. Gargama, H.; Chaturvedi, S.K.; Rai, R.N. Genetic Algorithm and Artificial Neural Networks in Reliability-Based Design Optimization. Reliab. Anal. Mod. Power Syst. 2024, 8, 125–141. [Google Scholar]
  46. Wang, Y.; Sha, W.; Xiao, M.; Qiu, C.W.; Gao, L. Deep-Learning-Enabled Intelligent Design of Thermal Metamaterials (Adv. Mater. 33/2023). Adv. Mater. 2023, 35, 2370237. [Google Scholar] [CrossRef]
  47. Chen, B.; Cao, L.; Chen, C.; Chen, Y.; Yue, Y. A comprehensive survey on the chicken swarm optimization algorithm and its applications: State-of-the-art and research challenges. Artif. Intell. Rev. 2024, 57, 170. [Google Scholar] [CrossRef]
  48. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  49. Dong, Z.; Sheng, Z.; Zhao, Y.; Zhi, P. Robust optimization design method for structural reliability based on active-learning MPA-BP neural network. Int. J. Struct. Integr. 2023, 14, 248–266. [Google Scholar] [CrossRef]
  50. Song, X.; Zou, L.; Tang, M. An improved Monte Carlo reliability analysis method based on BP neural network. Appl. Sci. 2025, 15, 4438. [Google Scholar] [CrossRef]
  51. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  52. Zhong, C.; Wang, M.; Dang, C.; Ke, W.; Guo, S. First-order reliability method based on Harris Hawks Optimization for high-dimensional reliability analysis. Struct. Multidiscip. Optim. 2020, 62, 1951–1968. [Google Scholar] [CrossRef]
  53. Gharehchopogh, F.S.; Namazi, M.; Ebrahimi, L.; Abdollahzadeh, B. Advances in sparrow search algorithm: A comprehensive survey. Arch. Comput. Methods Eng. 2023, 30, 427–455. [Google Scholar] [CrossRef]
  54. Zhang, Z.; Deng, W.; Jiang, C. A PDF-based performance shift approach for reliability-based design optimization. Comput. Methods Appl. Mech. Eng. 2021, 374, 113610. [Google Scholar] [CrossRef]
Figure 1. Main structure of BP neural network.
Figure 2. BP neural network feedforward transmission.
Figure 3. Schematic Diagram of the Heavy-Tailed Stable Distribution Characteristics of Levy Flight.
Figure 4. Flow chart of BLCSO algorithm.
Figure 5. Flowchart of Uncertainty design optimization method.
Figure 6. Comparison of Iterative Processes among Different Optimization Algorithms.
Figure 7. Objective function iteration process in the mathematical example.
Figure 8. The 10-bar truss structure.
Figure 9. Objective function iteration process in the example of the 10-rod truss structure.
Figure 10. Cantilever beam structure.
Figure 11. Objective function iteration process in the example of the cantilever beam structure.
Figure 12. Schematic Diagram of Vehicle Side Impact Structure [54].
Table 1. The parameters of the metaheuristic algorithms.

| Algorithm | Parameters |
|---|---|
| All algorithms | Maximum iterative number $T_{max}$; population size $N_P$ |
| PSO | Acceleration constants $c_1 = c_2 = 2$; weight factor 0.9 |
| GWO | Universal parameters only (i.e., $T_{max}$ and $N_P$) |
| GA | Recombination probability 0.7; mutation probability 0.001 |
| SSA | Ratio of discoverers 0.7; ratio of vigilant 0.2; ratio of followers 0.1 |
| BLCSO | $RN = 15$, $HN = 70$, $CN = 15$, $MN = 50$, $G = 10$, $FL \in [0.4, 1]$ |
Table 2. The iteration process of this method in the mathematical example.

| Iterations | $\mu_{x_1}$ | $\mu_{x_2}$ | $\beta$ | $f$ | Number of Sample Points |
|---|---|---|---|---|---|
| 1 | 5 | 5 | 3.6608 | 10 | 26 |
| 2 | 4.5342 | 4.1249 | 2.9457 | 8.3254 | 13 |
| 3 | 4.9274 | 3.5429 | 2.9739 | 8.5342 | 9 |
| 4 | 5.2315 | 3.4221 | 3.1901 | 8.5212 | 5 |
| 5 | 5.1156 | 3.4045 | 3.0451 | 8.5242 | 3 |
| 6 | 5.1158 | 3.4011 | 3.0012 | 8.5234 | 2 |
| 7 | 5.1125 | 3.4058 | 3.0001 | 8.524 | 0 |
Table 3. Contrast of the optimization results yielded by various methods in the mathematical example.

| Methods | $\mu_{x_1}$ | $\mu_{x_2}$ | $\beta$ | $f$ | Number of Sample Points |
|---|---|---|---|---|---|
| Kriging-method | 5.2587 | 3.4098 | 3.0077 | 8.6685 | 71 |
| RSM-method | 5.2816 | 3.4079 | 3.1021 | 8.6895 | 67 |
| SVR-method | 5.3147 | 3.4102 | 3.1892 | 8.7249 | 82 |
| RBF-method | 5.2906 | 3.4127 | 3.1088 | 8.7033 | 65 |
| BLCSO-None BP | 5.1138 | 3.4059 | 2.9910 | 8.5197 | -- |
| BLCSO-BP | 5.2632 | 3.4149 | 2.9957 | 8.6781 | 77 |
| CSO-BO-BP | 5.2975 | 3.4121 | 3.0121 | 8.7096 | 63 |
| BLCSO-PSO-BP | 5.2788 | 3.4099 | 3.0904 | 8.6887 | 65 |
| BLCSO-BO-BP | 5.1137 | 3.4065 | 3.0004 | 8.5202 | 58 |
Table 4. The iteration process of this method in the example of the 10-rod truss structure.

| Iteration | μ_x1 | μ_x2 | μ_x3 | μ_x4 | μ_x5 | μ_x6 | μ_x7 | μ_x8 | μ_x9 | μ_x10 | β | f | Number of sample points |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 2.08 | 12,589 | 192 |
| 2 | 3.4235 | 2.4392 | 2.4683 | 2.4456 | 3.5674 | 2.4355 | 2.4657 | 2.5435 | 2.4452 | 2.4415 | 1.57 | 11,067 | 125 |
| 3 | 3.9875 | 2.6759 | 2.3526 | 2.1574 | 3.6521 | 1.9684 | 2.8976 | 2.0843 | 2.0881 | 2.0806 | 1.25 | 10,738 | 98 |
| 4 | 3.6897 | 1.6984 | 1.5235 | 1.5732 | 4.4624 | 1.9808 | 2.8649 | 1.5332 | 1.5378 | 2.4581 | 2.24 | 9580 | 121 |
| 5 | 3.8425 | 2.3562 | 1.3624 | 1.5321 | 3.6023 | 2.4557 | 2.9536 | 1.2343 | 1.3287 | 2.1083 | −0.89 | 9438 | 79 |
| 6 | 4.0874 | 1.6749 | 1.2341 | 1.2351 | 3.7902 | 3.0242 | 1.9885 | 1.0985 | 1.1023 | 3.2019 | 0.09 | 9193 | 82 |
| 7 | 4.1537 | 1.9685 | 1.3442 | 1.0231 | 5.0312 | 2.5564 | 1.5676 | 1 | 1.0284 | 2.1473 | 1.36 | 8823 | 65 |
| 8 | 4.0213 | 1.7694 | 1 | 1 | 4.3235 | 2.7546 | 2.4231 | 1 | 1.0035 | 2.6749 | 1.96 | 8979 | 43 |
| 9 | 3.9979 | 1.9534 | 1 | 1 | 4.4563 | 2.8645 | 2.1893 | 1 | 1 | 2.7434 | 2.00 | 9045 | 25 |
Table 5. Comparison of the optimization results yielded by various methods in the example of the 10-rod truss structure.

| Quantity | Kriging method | RSM method | SVR method | RBF method | BLCSO-None BP | BLCSO-BP | CSO-BO-BP | BLCSO-PSO-BP | BLCSO-BO-BP |
|---|---|---|---|---|---|---|---|---|---|
| μ_x1 | 3.7869 | 4.0261 | 3.8278 | 3.9972 | 3.7922 | 3.9678 | 3.9706 | 3.9885 | 3.9979 |
| μ_x2 | 1.9476 | 1.9070 | 1.9546 | 1.9572 | 1.9595 | 1.9757 | 1.9031 | 1.9022 | 1.9534 |
| μ_x3 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| μ_x4 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| μ_x5 | 4.9834 | 4.3789 | 4.9575 | 4.8540 | 4.5655 | 4.7431 | 4.7269 | 4.5199 | 4.4563 |
| μ_x6 | 2.8720 | 2.7942 | 2.8649 | 2.8003 | 2.9035 | 2.9392 | 2.9046 | 2.8976 | 2.8645 |
| μ_x7 | 2.2461 | 2.3416 | 2.2157 | 2.3419 | 2.1849 | 2.2655 | 2.2097 | 2.2102 | 2.1893 |
| μ_x8 | 1 | 1.0804 | 1 | 1.0042 | 1 | 1 | 1 | 1 | 1 |
| μ_x9 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| μ_x10 | 2.4759 | 2.6894 | 2.5970 | 2.5915 | 2.7934 | 2.5172 | 2.8235 | 2.8300 | 2.7434 |
| β | 2.0027 | 2.082 | 2.1012 | 2.1567 | 1.9927 | 2.0988 | 2.1121 | 1.9908 | 2.0015 |
| f | 9093.7 | 9062.4 | 9126.1 | 9181.7 | 9048.6 | 9141.4 | 9174.7 | 9098.8 | 9045.6 |
| Sample data volume | 2594 | 4839 | 6948 | 3171 | -- | 5502 | 1344 | 1499 | 830 |
Table 6. The iteration process of this method in the example of the cantilever beam structure.

| Iteration | d_1 | d_2 | β_1 | β_2 | f | Number of sample points |
|---|---|---|---|---|---|---|
| 1 | 2 | 4 | 0.5366 | 0.2312 | 8 | 95 |
| 2 | 2.4683 | 3.6482 | 2.0843 | 2.4873 | 9.0049 | 114 |
| 3 | 2.6894 | 3.5842 | 2.4838 | 3.8725 | 9.6393 | 79 |
| 4 | 2.5849 | 3.6471 | 2.3847 | 3.6784 | 9.4274 | 74 |
| 5 | 2.4463 | 3.7592 | 2.4897 | 4.0921 | 9.1961 | 85 |
| 6 | 2.4493 | 3.7542 | 2.5933 | 3.5403 | 9.1952 | 82 |
| 7 | 2.4512 | 3.7549 | 2.5893 | 3.5081 | 9.2040 | 74 |
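The objective values reported for the cantilever beam iterations are consistent, row by row, with an objective of the form f = d_1 · d_2 (a cross-sectional-area quantity). This is an observation from the tabulated data rather than the authors' stated formulation; a quick consistency check over a few iterations:

```python
# (d1, d2, f) triples copied from iterations 2, 3, and 7 of Table 6.
rows = [
    (2.4683, 3.6482, 9.0049),
    (2.6894, 3.5842, 9.6393),
    (2.4512, 3.7549, 9.2040),
]

# Largest deviation between d1*d2 and the tabulated objective value.
max_err = max(abs(d1 * d2 - f) for d1, d2, f in rows)
```

The deviations are at the level of rounding in the fourth decimal place, supporting the product-form reading of the objective.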
Table 7. Comparison of optimization results of different methods in the example of the cantilever beam structure.

| Method | d_1 | d_2 | f | β_1 | β_2 | Number of sample points |
|---|---|---|---|---|---|---|
| Kriging method | 2.5436 | 3.6398 | 9.2582 | 2.6127 | 3.6134 | 2894 |
| RSM method | 2.4733 | 3.7536 | 9.2838 | 2.6324 | 3.6097 | 2916 |
| SVR method | 2.7850 | 3.7547 | 10.4568 | 2.7575 | 3.5965 | 1934 |
| RBF method | 2.5176 | 3.6970 | 9.3076 | 2.5972 | 3.5484 | 1655 |
| BLCSO-None BP | 2.4575 | 3.7550 | 9.2279 | 2.5147 | 3.6058 | -- |
| BLCSO-BP | 2.5008 | 3.6149 | 9.0401 | 2.4218 | 3.5917 | 1678 |
| CSO-BO-BP | 2.4792 | 3.7957 | 9.4103 | 2.7035 | 3.7491 | 1125 |
| BLCSO-PSO-BP | 2.4660 | 3.7819 | 9.3261 | 2.6599 | 3.6251 | 1092 |
| BLCSO-BO-BP | 2.4512 | 3.7549 | 9.2040 | 2.5893 | 3.5081 | 603 |
Table 8. Statistical information of the random variables and parameters in the vehicle side impact case.

| Random variable | Symbol | Lower bound | Mean | Upper bound | Standard deviation |
|---|---|---|---|---|---|
| Thickness of the inner side of the B-pillar | x_1 | 0.5 | μ_x1 | 1.5 | 0.03 |
| Reinforcement thickness of the B-pillar | x_2 | 0.45 | μ_x2 | 1.35 | 0.03 |
| Thickness of the inner side of the floor panel | x_3 | 0.5 | μ_x3 | 1.5 | 0.03 |
| Thickness of transverse members | x_4 | 0.5 | μ_x4 | 1.5 | 0.03 |
| Thickness of the door beam | x_5 | 0.875 | μ_x5 | 2.625 | 0.05 |
| Thickness of reinforcement for the door wire | x_6 | 0.4 | μ_x6 | 1.2 | 0.03 |
| Thickness of the roof side rail | x_7 | 0.4 | μ_x7 | 1.2 | 0.03 |
| Material strength of the inner side of the B-pillar | x_8 | -- | 0.192/0.345 | -- | 0.006 |
| Strength of the inner layer material on the side of the floor | x_9 | -- | 0.192/0.345 | -- | 0.006 |
| Obstacle height | x_10 | -- | 0 | -- | 10 |
| Impact location of the obstacle | x_11 | -- | 0 | -- | 10 |
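In the decoupled framework, the uncertainty-analysis step repeatedly draws realizations of the random variables in Table 8 around the current design point before each limit-state evaluation. A minimal sampling sketch assuming independent normal variables; the design means shown are hypothetical values within the Table 8 bounds, not results from the paper:

```python
import random

random.seed(42)

# Standard deviations taken from Table 8; x10 (obstacle height) is a
# zero-mean random parameter with standard deviation 10.
STD = {"x1": 0.03, "x2": 0.03, "x5": 0.05, "x10": 10.0}

def draw_sample(design: dict) -> dict:
    """One realization of the random inputs for a limit-state evaluation."""
    out = {name: random.gauss(mu, STD[name]) for name, mu in design.items()}
    out["x10"] = random.gauss(0.0, STD["x10"])  # not a design variable
    return out

# Hypothetical current design means for a subset of the thickness variables.
design = {"x1": 1.0, "x2": 0.9, "x5": 1.5}
samples = [draw_sample(design) for _ in range(1000)]
mean_x1 = sum(s["x1"] for s in samples) / len(samples)
```

Each such realization would be fed to the BO-tuned BP surrogate instead of the expensive crash simulation, which is where the sample-count savings in Table 9 come from.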
Table 9. Comparison of optimization results of different methods in the example of the vehicle side impact.

| Method | d | f | Minimum reliability index | Number of sample points |
|---|---|---|---|---|
| Kriging method | (0.8344, 1.4387, 0.7381, 1.5000, 1.0765, 1.2000, 0.7195) | 30.7120 | 2.9993 | 3170 |
| RSM method | (0.7904, 1.2754, 0.7620, 1.5000, 1.0679, 1.2000, 0.7551) | 29.6559 | 2.9350 | 3851 |
| SVR method | (0.8162, 1.3119, 0.7498, 1.4997, 1.0740, 1.2000, 0.7583) | 29.9590 | 2.9143 | 2988 |
| RBF method | (0.7822, 1.3813, 0.7551, 1.5000, 1.0506, 1.2000, 0.6991) | 30.0903 | 2.9254 | 2890 |
| BLCSO-None BP | (0.7872, 1.3500, 0.6887, 1.5000, 1.0706, 1.2000, 0.7284) | 29.5581 | 3.0046 | -- |
| BLCSO-BP | (0.7890, 1.3959, 0.7547, 1.5000, 1.0493, 1.2000, 0.7288) | 30.2969 | 2.9407 | 2874 |
| CSO-BO-BP | (0.8168, 1.3489, 0.7445, 1.5000, 1.0646, 1.2000, 0.7283) | 30.0743 | 2.9120 | 2176 |
| BLCSO-PSO-BP | (0.8090, 1.3502, 0.7299, 1.5000, 1.0753, 1.2000, 0.7183) | 33.5185 | 3.1082 | 2209 |
| BLCSO-BO-BP | (0.7971, 1.2813, 0.6948, 1.5000, 1.0317, 1.2000, 0.7284) | 29.1217 | 2.9766 | 1992 |
Ji, Q.; Li, R.; Jing, S. Uncertainty-Based Design Optimization Framework Based on Improved Chicken Swarm Algorithm and Bayesian Optimization Neural Network. Appl. Sci. 2025, 15, 9671. https://doi.org/10.3390/app15179671