Article

Fault Prediction of Control Clusters Based on an Improved Arithmetic Optimization Algorithm and BP Neural Network

1 College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
2 Jiangsu Automation Research Institute, Lianyungang 222061, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(13), 2891; https://doi.org/10.3390/math11132891
Submission received: 16 May 2023 / Revised: 17 June 2023 / Accepted: 22 June 2023 / Published: 27 June 2023
(This article belongs to the Special Issue Advanced Optimization Methods and Applications, 2nd Edition)

Abstract

Higher accuracy in cluster failure prediction can ensure the long-term stable operation of cluster systems and effectively alleviate energy losses caused by system failures. Previous works have mostly employed BP neural networks (BPNNs) to predict system faults, but this approach suffers from reduced prediction accuracy due to the inappropriate initialization of weights and thresholds. To address these issues, this paper proposes an improved arithmetic optimization algorithm (AOA) to optimize the initial weights and thresholds in BPNNs. Specifically, we first introduced an improved AOA via multi-subpopulation and comprehensive learning strategies, called MCLAOA. This approach employed multi-subpopulations to effectively alleviate the poor global exploration performance caused by a single elite, and the comprehensive learning strategy enhanced the exploitation performance via information exchange among individuals. More importantly, a nonlinear strategy with a tangent function was designed to ensure a smooth balance and transition between exploration and exploitation. Secondly, the proposed MCLAOA was utilized to optimize the initial weights and thresholds of BPNNs in cluster fault prediction, which could enhance the accuracy of fault prediction models. Finally, the experimental results for 23 benchmark functions, CEC2020 benchmark problems, and two engineering examples demonstrated that the proposed MCLAOA outperformed other swarm intelligence algorithms. For the 23 benchmark functions, it improved the optimal solutions in 16 functions compared to the basic AOA. The proposed fault prediction model achieved comparable performance to other swarm-intelligence-based BPNN models. Compared to basic BPNNs and AOA-BPNNs, the MCLAOA-BPNN showed improvements of 2.0538 and 0.8762 in terms of mean absolute percentage error, respectively.

1. Introduction

In recent years, due to the rapid development of computer technology, computer systems have been widely applied in various industries within the national economy, greatly promoting socioeconomic development. At the same time, higher requirements have been placed on the sustainable and stable operation of computer systems, and people are increasingly concerned about the availability of computer systems [1,2,3,4,5]. Currently, providing continuous and stable services in control cluster systems is an urgent issue in computer cluster technology.
Previous work has mostly focused on load balancing [6,7], resource scheduling [4,8], fault tolerance [9,10], and other aspects of cluster systems; by contrast, relatively little research has addressed improving system stability through fault prediction. Control cluster fault prediction aims to predict node failures at an early stage, enabling proactive resource scheduling and enhancing the availability of the cluster system. Pinto et al. [11] designed a distributed computing system for Hadoop clusters and used an SVM for cluster fault prediction. Mukwevho et al. [12] analyzed three fault-tolerant techniques and achieved fault prediction through proactive methods. Das et al. [13] employed log analysis for fault prediction. However, most of these fault prediction approaches required building custom models according to specific requirements and were not widely applicable. While neural-network-based fault prediction can be widely applied, it suffers from issues such as slow convergence and susceptibility to local optima due to sensitivity to the initial weights and thresholds. Fortunately, swarm intelligence approaches can effectively adjust these parameters. This emerging research field, which combines machine learning with swarm intelligence, has proven able to obtain outstanding results in different areas [14,15]. Therefore, in this work, we design a novel intelligent algorithm and employ it to find the optimal parameters of the neural network prediction model.
The arithmetic optimization algorithm (AOA) is a new metaheuristic algorithm proposed by Abualigah et al. in 2021, which primarily utilizes basic arithmetic operators to perform exploration (multiplication and division) and exploitation (subtraction and addition) [16]. The algorithm’s main advantages lie in its simplicity, ease of programming, and fewer parameters [17], which have led researchers to apply it in various fields, including engineering design [18,19,20], cloud computing [21], and image processing [22], to name a few [23,24,25]. However, the AOA faces challenges in dealing with complex optimization problems, particularly regarding issues of local optima and slow convergence. Recently, improved versions of the AOA have emerged as a trend. For example, Li et al. [18] employed 10 chaotic maps to modify the control parameters and improve the exploration and exploitation stages during the iteration process. However, this approach did not modify the mathematical model, which implies that it may still encounter local optima when faced with complex optimization problems. Çelik [19] employed information exchange [26,27], Gaussian distribution [26], and quasi-opposition [28,29] to propose an information-exchanged Gaussian AOA with quasi-opposition learning (IEGQO-AOA), which improved the convergence performance without significantly increasing the computational complexity of the algorithm. Gölcük et al. [23] employed highly disruptive polynomial mutation and local escaping operators to propose an improved AOA for training feedforward neural networks. These methods [19,23] employed mutation factors to improve the exploration performance of the AOA, enabling it to escape local optima. However, mutation factors may generate solutions that deviate from the optimal solution, thus reducing the convergence speed. Kaveh et al. [25] improved population diversity and convergence performance by modifying the mathematical model in the exploration and exploitation stage of the AOA and applied the improved AOA to structural optimization. Some research works have improved the performance of the AOA by combining it with other meta-heuristic algorithms, such as the sine cosine algorithm (SCA) [20], the salp swarm algorithm (SSA) [21], and the Aquila optimizer (AO) [30]. It is worth noting that the aforementioned algorithms improved the global optimization performance by introducing mutation factors, modifying the control parameters, or incorporating other algorithms.
Swarm intelligence algorithms consist of two main phases, namely exploration and exploitation [31,32,33,34]. The purpose of exploration is to search the regions where the global optimum may exist. Exploitation aims to further refine and precisely search the promising regions identified during exploration. It is well known that the key to optimizing performance in swarm intelligence lies in the search capabilities of exploration and exploitation, as well as the balance and transition between these two phases. However, the AOA utilizes multiplication and division operators in the exploration phase and addition and subtraction operators in the exploitation phase, and it revolves around a single elite individual without involving information sharing among individuals. These limitations greatly restrict the exploration and exploitation performance of the algorithm. In addition, a linear mechanism does not accurately reflect the complex optimization process; therefore, it fails to facilitate a smooth transition from the exploration phase to the exploitation phase.
Motivated by the aforementioned analysis, we proposed a novel improved AOA via multi-subpopulation (MS) and comprehensive learning (CL) strategies for global optimization (MCLAOA). Subsequently, the MCLAOA was combined with a BP neural network (BPNN) to form the MCLAOA-BPNN model for cluster fault prediction. Firstly, we proposed the novel MCLAOA. Specifically, the MS was applied in the exploration phase, where we divided the population into several subpopulations, and each subpopulation revolved around its own sub-elite. This strategy alleviated the weakness of a single elite in terms of exploration performance and enhanced population diversity. The CL was used in the exploitation phase to increase information sharing among individuals and speed up the convergence of the algorithm. In addition, to ensure a smooth transition from the exploration phase to the exploitation phase, a nonlinear math optimizer accelerated (MOA) with a tangent function was employed instead of the standard MOA. After obtaining the high-performance MCLAOA, we combined it with the BPNN to form the MCLAOA-BPNN cluster fault prediction model. The model utilized MCLAOA to obtain the best initial weights and thresholds for the BPNN, thereby improving prediction accuracy and providing the foundation for resource scheduling and sustainable operation of cluster systems.
In this work, the main contributions are summarized as follows:
(1)
To enhance the accuracy of cluster fault prediction, we attempted to design a new optimization algorithm and combined it with BPNN to form the MCLAOA-BPNN cluster fault prediction model. The model utilized MCLAOA to optimize the initial weights and thresholds of BPNN.
(2)
To address the lack of individual information sharing in both the exploration and exploitation phases, we proposed the MCLAOA. This approach employed the MS and CL strategies to modify the mathematical models of the exploration and exploitation phases, thereby improving the optimization performance.
(3)
To ensure a smooth transition from the exploration phase to the exploitation phase for the MCLAOA, we designed a nonlinear MOA with tangent functions to replace the linear mechanism in the standard AOA.
(4)
Experimental results on 23 benchmark functions, the CEC2020 benchmark problems, and two engineering examples showed that the proposed MCLAOA has clear advantages over the comparison algorithms. In addition, the MCLAOA-BPNN had better prediction accuracy than the other algorithms.
The remainder of this paper is structured as follows: The standard AOA is introduced in Section 2, and the proposed MCLAOA is presented in Section 3. In Section 4, the results and analysis of the proposed algorithm are presented using 23 benchmark functions, the CEC2020 benchmark problems, and two engineering design problems. Section 5 presents the MCLAOA-BPNN model for cluster fault prediction and compares it with other models. Finally, the conclusion of this paper is provided in Section 6.

2. Arithmetic Optimization Algorithm (AOA)

Inspired by arithmetic operators, the AOA was proposed as a new intelligent algorithm. The basic principle of the AOA is shown in Figure 1; it is mainly divided into an exploration phase and an exploitation phase [16].

2.1. Math Optimizer Accelerated (MOA)

Before optimization, the $MOA$ is designed to determine whether the population performs the exploration phase or the exploitation phase. Given a random number $r_1$ between 0 and 1, the global exploration phase is executed if $r_1 > MOA$; otherwise, the local exploitation phase is executed. The $MOA$ can be formulated as:

$$MOA(t) = Min + t \times \frac{Max - Min}{T}, \qquad (1)$$

where $t$ and $T$ are the current iteration and the maximum number of iterations, respectively, and $Max$ and $Min$ represent the maximum and minimum values of the accelerated function.
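For illustration, a minimal Python sketch of this phase-selection rule follows. The values Min = 0.2 and Max = 0.9 are assumptions drawn from common AOA settings, not values stated in this paper, and the function names are ours.

```python
import random

def moa(t, T, moa_min=0.2, moa_max=0.9):
    # Equation (1): MOA grows linearly from Min to Max over the iterations.
    return moa_min + t * (moa_max - moa_min) / T

def pick_phase(t, T):
    # Exploration is more likely early (small MOA), exploitation later.
    r1 = random.random()
    return "exploration" if r1 > moa(t, T) else "exploitation"
```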

2.2. Exploration Phase

In this phase, the AOA mainly adopts division and multiplication search strategies to find better candidate solutions, and the mathematical model is as follows:

$$X_{i,j} = \begin{cases} best(X_j) \div (MOP + \epsilon) \times S_j, & r_2 < 0.5, \\ best(X_j) \times MOP \times S_j, & \text{otherwise}, \end{cases} \qquad (2)$$

$$MOP(t) = 1 - \frac{t^{1/\alpha}}{T^{1/\alpha}}, \qquad (3)$$

$$S_j = (UB_j - LB_j) \times \eta + LB_j, \qquad (4)$$

where $X_{i,j}$ represents the position of the $j$th dimension of the $i$th individual, and $best(X_j)$ is the $j$th dimension of the best solution among all individuals. $\epsilon$ is a small integer number that prevents the denominator from being 0. $MOP(t)$ is a parameter representing the step size factor of the current iteration, $S_j$ denotes the step size of the $j$th dimension, and $r_2$ is a random number in [0, 1]. $UB_j$ and $LB_j$ represent the upper and lower bound values of the $j$th dimension, respectively. $\eta$ is the control parameter that regulates the search process, and $\alpha$ is a sensitive parameter; they are set to 0.5 and 5, respectively, according to the literature [16].

2.3. Exploitation Phase

This phase performs the local exploitation. Additive and subtractive operators are adopted to search for the optimal solution. Specifically, given a random number $r_3$ between 0 and 1, if $r_3 < 0.5$, the subtractive operator is employed to search for the optimal solution; otherwise, the additive operator is employed to update the population position, which can be expressed as follows:

$$X_{i,j} = \begin{cases} best(X_j) - MOP \times S_j, & r_3 < 0.5, \\ best(X_j) + MOP \times S_j, & \text{otherwise}. \end{cases} \qquad (5)$$
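Putting Equations (1)-(5) together, a minimal sketch of one standard AOA position update might look as follows. Drawing r2 and r3 once per individual (rather than per dimension), the assumed Min/Max of the MOA, and the bound clipping are simplifying assumptions of this sketch.

```python
import numpy as np

def aoa_update(best, lb, ub, t, T, alpha=5.0, eta=0.5, eps=1e-12):
    # best, lb, ub: 1-D arrays; returns one updated candidate position.
    mop = 1.0 - t ** (1.0 / alpha) / T ** (1.0 / alpha)  # Equation (3)
    s = (ub - lb) * eta + lb                             # Equation (4)
    moa = 0.2 + t * (0.9 - 0.2) / T                      # Equation (1), assumed Min/Max
    r1, r2, r3 = np.random.rand(3)
    if r1 > moa:                                         # exploration, Equation (2)
        x = best / (mop + eps) * s if r2 < 0.5 else best * mop * s
    else:                                                # exploitation, Equation (5)
        x = best - mop * s if r3 < 0.5 else best + mop * s
    return np.clip(x, lb, ub)
```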

3. Proposed Method

From the mathematical model of the AOA, it can be seen that all individuals perform expansion or reduction operations around a single elite ($best$) in the exploration phase, which limits the search ability of the AOA. The exploitation phase employs a fixed step factor ($S_j$) without memory retention, which can lead to a lack of information exchange between individuals and reduce convergence effectiveness. In addition, the MOA with a linear mechanism cannot handle complex optimization problems well. To deal with the above shortcomings, an improved AOA was proposed to solve global optimization problems using the MS and CL strategies. Compared with the standard AOA, three operators were introduced in this paper: MS, CL, and a nonlinear MOA with a tangent function. The specific mathematical models are as follows.

3.1. Multi-Subpopulation (MS) Strategy

Global search refers to identifying the optimal region for the target within a larger search space to prevent the algorithm from getting trapped in local optima [35,36]. However, according to Equation (2), it is known that all individuals explore the search space based on the  b e s t X j  and a fixed step size factor ( S j ), which reduces population diversity and exploration performance. Therefore, we propose the MS strategy to improve the exploration performance of AOA, as shown in Figure 2.
To be specific, a population of size $N_p$ is first divided into $p$ subpopulations. Second, all individuals are evaluated for fitness, and the individual with the minimum fitness value in each group is selected as the sub-elite, forming a sub-elite group. Then, all individuals except the sub-elite group are rearranged into $p$ groups to explore the search space. Finally, each sub-elite is randomly assigned to a different group. Here, the sub-elite group $gbest$ can be expressed as Equation (6):
$$gbest = \begin{bmatrix} gbest_1 \\ gbest_2 \\ \vdots \\ gbest_p \end{bmatrix} \overset{sub\text{-}elite}{=} \begin{bmatrix} \arg f_{best}([x_{1,1}, x_{1,2}, \ldots, x_{1,q}]) \\ \arg f_{best}([x_{2,1}, x_{2,2}, \ldots, x_{2,q}]) \\ \vdots \\ \arg f_{best}([x_{p,1}, x_{p,2}, \ldots, x_{p,q}]) \end{bmatrix}, \qquad (6)$$

where $p$ is the number of sub-elite groups (i.e., subpopulations), $q$ represents the number of individuals contained in each subpopulation, $\arg f_{best}$ denotes the inverse function of fitness evaluation (it returns the position of the individual with the best fitness value), and $gbest_k$ ($k = 1, 2, \ldots, p$) represents the position of the $k$th sub-elite individual in the sub-elite group. By introducing the MS strategy, population diversity can be ensured, the exploration ability in the search space is increased, and the algorithm is prevented from falling into local optima.
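A possible NumPy sketch of this grouping step is shown below (minimization assumed). The exact reshuffling bookkeeping is an illustrative assumption, since the paper describes it only at the level of Figure 2, and the helper name is ours.

```python
import numpy as np

def split_subpopulations(X, fitness, p):
    # X: (Np, D) positions; fitness: (Np,) values; p: number of subpopulations.
    Np = X.shape[0]
    groups = np.array_split(np.random.permutation(Np), p)
    # The fittest member of each group becomes its sub-elite gbest_k (Equation (6)).
    gbest = np.array([X[g[np.argmin(fitness[g])]] for g in groups])
    # Remaining individuals are reshuffled into p new groups, each of which is
    # then assigned one sub-elite to revolve around.
    rest = np.concatenate([np.delete(g, np.argmin(fitness[g])) for g in groups])
    np.random.shuffle(rest)
    return gbest, np.array_split(rest, p)
```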

3.2. Comprehensive Learning (CL) Strategy

After finding some promising solutions during the exploration phase, local exploitation means attempting to delve deeper into these solutions to find better ones [35,36]. However, according to Equations (4) and (5), it can be seen that the position update rule during the exploitation phase involves only the upper and lower bounds, without any information exchange between individuals. That is, all individuals affect the convergence rate only by increasing or decreasing a fixed step factor. Inspired by the CL particle swarm optimizer (CLPSO) [37], the CL strategy is used during the exploitation phase. On the one hand, the CL strategy can preserve individual historical information and facilitate information sharing. On the other hand, the population does not learn from all dimensions of a single individual, but rather from different dimensions of the entire population. The learning probability for each dimension is determined based on a probability value [37]. The learning probability for the $i$th individual can be described as follows:

$$P_i^{cl} = 0.05 + 0.45 \times \frac{\exp\left(\frac{10(i-1)}{N_p - 1}\right) - 1}{\exp(10) - 1}, \qquad (7)$$
where $N_p$ is the population size. The pseudo-code of the CL strategy is shown in Algorithm 1.
Algorithm 1 Pseudo-code of the CL strategy
1: for i = 1: $N_p$
2:    Generate random number $r$
3:    Compute the learning probability value ($P_i^{cl}$) using Equation (7).
4:    Generate the indices of two random individuals, $f_{i,j}^1$ and $f_{i,j}^2$.
5:    if $r < P_i^{cl}$
6:       if $f(X_{f_{i,j}^1}) < f(X_{f_{i,j}^2})$
7:          $X_{f_{i,j}} = X_{f_{i,j}^1}$
8:       else
9:          $X_{f_{i,j}} = X_{f_{i,j}^2}$
10:      end if
11:   else
12:      $X_{f_{i,j}} = X_{i,j}$
13:   end if
14: end for
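A small Python sketch of Equation (7) and the tournament in Algorithm 1 follows; the helper names are ours, the formula's index i is 1-based, and minimization is assumed.

```python
import math
import random

def learning_probability(i, Np):
    # Equation (7): ramps from 0.05 (i = 1) to 0.5 (i = Np); i is 1-indexed.
    return 0.05 + 0.45 * (math.exp(10 * (i - 1) / (Np - 1)) - 1) / (math.exp(10) - 1)

def choose_exemplar(i, j, X, fitness):
    # Dimension j of individual i either keeps its own value or copies the
    # fitter of two randomly chosen individuals (the tournament in Algorithm 1).
    Np = len(X)
    if random.random() < learning_probability(i + 1, Np):
        a, b = random.randrange(Np), random.randrange(Np)
        winner = a if fitness[a] < fitness[b] else b
        return X[winner][j]
    return X[i][j]
```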

3.3. Improved AOA with MS and CL (MCLAOA)

Based on the above analysis, the MS and CL strategies were introduced into the standard AOA to increase population diversity and improve convergence performance. For the MCLAOA, the MS strategy is only applied in the exploration phase, while the CL strategy is only applied in the exploitation phase. The specific details of these improvements are introduced below.
Exploration phase: In this phase, considering that expanding or shrinking by a certain proportion always limits the exploration performance of AOA, inspired by the teaching-learning-based optimization (TLBO) [38], we introduced the teaching phase of TLBO. Specifically, half of the subpopulations adopted the exploration phase of AOA, and the rest adopted the teaching phase.
For the first half of the subpopulations, the MS strategy introduced in Section 3.1 was applied. We employed multiple sub-elites to replace the single elite, increasing the diversity of generated solutions and preventing premature convergence caused by local optima. Applying Equation (6) to Equation (2), the modified Equation (2) can be described as:

$$X_{i,j}^{k} = \begin{cases} gbest_k(X_j) \div (MOP + \epsilon) \times S_j, & r_2 < 0.5, \\ gbest_k(X_j) \times MOP \times S_j, & \text{otherwise}, \end{cases} \qquad (8)$$

where $X_{i,j}^{k}$ represents the position of the $j$th dimension of the $i$th individual in the $k$th group.
For the second half of the population, the teaching phase is applied:

$$X_{i,j} = X_{i,j} + rand \times \left(best(X_j) - TF \times X_{ave}\right), \qquad (9)$$

where $X_{ave}$ is the average position of all individuals, and $TF$ denotes the teacher factor ($TF = round[1 + rand(0,1)]$).
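The two exploration rules might be combined as in the sketch below. Assigning an individual's group k by index order, drawing the random numbers once per individual, and clipping to the bounds are illustrative assumptions.

```python
import numpy as np

def mcl_exploration(X, gbest, best, lb, ub, mop, eps=1e-12):
    # First half of the population: AOA operators around sub-elites, Equation (8).
    # Second half: TLBO teaching phase around the global best, Equation (9).
    Np, D = X.shape
    s = (ub - lb) * 0.5 + lb                   # step factor with eta = 0.5
    x_ave = X.mean(axis=0)
    p = gbest.shape[0]
    for i in range(Np):
        if i < Np // 2:
            k = i % p                          # assumed group assignment
            if np.random.rand() < 0.5:
                X[i] = gbest[k] / (mop + eps) * s
            else:
                X[i] = gbest[k] * mop * s
        else:
            tf = round(1 + np.random.rand())   # teacher factor TF in {1, 2}
            X[i] = X[i] + np.random.rand(D) * (best - tf * x_ave)
    return np.clip(X, lb, ub)
```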
Exploitation phase: The standard AOA cannot retain memory during the exploitation phase, which leads to slow convergence. After the CL strategy is introduced into Equation (5), individuals can exchange information, speeding up the algorithm’s convergence to the global optimum. The specific modification is shown in Equation (10):

$$X_{i,j} = \begin{cases} best(X_j) - MOP \times (X_{f_{i,j}} - X_{i,j}), & r_3 < 0.5, \\ best(X_j) + MOP \times (X_{f_{i,j}} - X_{i,j}), & \text{otherwise}, \end{cases} \qquad (10)$$

where $f_{i,j}$ defines the exemplar of the $i$th individual in the $j$th dimension, which determines whether $X_{f_{i,j}}$ is taken from the individual's own dimension or from another individual's dimension. This choice depends on the learning probability $P_i^{cl}$ in Equation (7): if the random number $r$ is greater than $P_i^{cl}$, the individual learns from its own dimension; otherwise, it learns from the dimension of another individual selected by the tournament in Algorithm 1.
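A one-dimension sketch of this update, assuming the exemplar has already been chosen per Algorithm 1 (function and argument names are ours):

```python
import numpy as np

def mcl_exploitation(x_ij, best_j, mop, exemplar_ij):
    # Equation (10): the step now depends on the distance between the chosen
    # exemplar X_{f_{i,j}} and the individual's current position, so population
    # information shapes the move instead of the fixed step S_j of Equation (5).
    step = mop * (exemplar_ij - x_ij)
    return best_j - step if np.random.rand() < 0.5 else best_j + step
```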
Modified MOA: The parameter $MOA$, which varies linearly with the number of iterations, cannot reflect the real optimization process. Therefore, this paper modifies the $MOA$ using a nonlinear parameter with a tangent function, as shown in Figure 3. The tangent function is introduced into Equation (1), and the modified $MOA$ can be expressed as:

$$MOA(t) = Min + (Max - Min) \times \tan\left(0.25\,\frac{t}{T}\,\pi\right). \qquad (11)$$

As can be seen from Figure 3, compared with the original $MOA$, the modified $MOA$ can better balance and transition between exploration and exploitation.
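As a sketch, the two schedules can be compared directly. Note that at t = T, tan(0.25π) = 1, so the modified MOA still ends at Max; the lo/hi default values are assumptions, as before.

```python
import math

def moa_linear(t, T, lo=0.2, hi=0.9):
    return lo + t * (hi - lo) / T                          # Equation (1)

def moa_tangent(t, T, lo=0.2, hi=0.9):
    # Equation (11): rises slowly early (longer exploration) and steeply near
    # the end, giving a smoother hand-off to exploitation than the linear rule.
    return lo + (hi - lo) * math.tan(0.25 * (t / T) * math.pi)
```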
In summary, the MS and teaching strategies were applied in the exploration phase. The MS improved population diversity and enabled faster global search. Furthermore, the search method in the exploration phase is limited by a step factor determined by the upper and lower bounds, which constrains the algorithm’s search capability. Therefore, we introduced the teaching phase of TLBO [38], where collective information from all individuals was used to mitigate this limitation. The CL strategy was applied in the exploitation phase of the algorithm, accelerating the convergence to the global optimum through information exchange among individuals. This strategy addressed the slow convergence issue caused by the addition and subtraction operators. In addition, we designed Equation (11) to effectively balance and smoothly transition between exploration and exploitation. We denote the improved AOA as MCLAOA. The detailed flowchart of the proposed MCLAOA is described in Figure 4. For clarity, the main contributions of this paper are highlighted, including  M O A , the exploration phase, and the exploitation phase. In addition, the pseudo-code of MCLAOA is shown in Algorithm 2.
Algorithm 2 Pseudo-code of the proposed MCLAOA
1: Initialize: population size $N_p$, positions $X_{i,j}$, parameters $\eta$ and $\alpha$, the maximum number of iterations $T$, and the number of subpopulations $p$.
2: Update:
3: while $t < T$
4:    Calculate the fitness function for the given solutions.
5:    Find the best solution ($best$ so far).
6:    Update the MOA value using Equation (11) and the MOP value using Equation (3).
7:    for i = 1: $N_p$
8:       for j = 1 to $D$ do
9:          Generate random values in [0, 1] ($r_1$, $r_2$, and $r_3$)
10:          if $r_1 > MOA$ then
11:             %%%Exploration phase%%%
12:             For the first half of the subpopulations:
13:             if $r_2 < 0.5$ then
14:                (1) Apply the division math operator (D “÷”).
15:                Update the $i$th solution’s position using the first rule in Equation (8).
16:             else
17:                (2) Apply the multiplication math operator (M “×”).
18:                Update the $i$th solution’s position using the second rule in Equation (8).
19:             end if
20:             For the second half of the subpopulations:
21:             Update the $i$th solution’s position using Equation (9).
22:          else
23:             %%%Exploitation phase%%%
24:             Generate random number $r$
25:             Compute the learning probability value ($P_i^{cl}$) using Equation (7).
26:             Decide whether $X_{f_{i,j}}$ comes from the individual itself or from another individual.
27:             if $r_3 < 0.5$ then
28:                (1) Apply the subtraction math operator (S “−”).
29:                Update the $i$th solution’s position using the first rule in Equation (10).
30:             else
31:                (2) Apply the addition math operator (A “+”).
32:                Update the $i$th solution’s position using the second rule in Equation (10).
33:             end if
34:          end if
35:       end for j
36:    end for i
37:    $t = t + 1$
38: end while
39: Output: Return the best solution $best$.

3.4. Computational Complexity

Compared to the standard AOA, the proposed MCLAOA mainly introduces the MS strategy, the CL strategy, and the modified MOA. Since the MS strategy involves sub-elite groups, a fitness-evaluation ranking is required, with complexity $O(N_p \cdot \log N_p)$. The computational complexity of the CL strategy is $O(N_p \cdot D)$, where $D$ is the dimension. The computational complexity of the modified MOA is almost unchanged compared to the original MOA. Therefore, the computational complexity of the proposed MCLAOA is $O(MCLAOA) = O(N_p \cdot D) + O(T \cdot (N_p \cdot D + N_p \cdot \log N_p))$. If $T \gg 1$, then $O(MCLAOA) \approx O(T \cdot N_p \cdot (D + \log N_p))$.

4. Results and Analysis

The experiments were conducted using MATLAB 2017b on a PC with an Intel Core i7-10700 (2.90 GHz) and 16 GB of RAM. To examine the performance of the proposed MCLAOA, the 23 benchmark functions and two engineering design problems were employed. The 23 benchmark functions [16] are shown in Table 1.
To verify the advanced performance of the proposed MCLAOA, comparisons were made with several algorithms, including (1) versions of the AOA, such as the AOA [16] and the chaotic AOA (CAOA) [18]; (2) advanced metaheuristic algorithms, such as the reptile search algorithm (RSA) [39], the whale optimization algorithm (WOA) [40], and the grasshopper optimization algorithm (GOA) [41]; (3) a classical metaheuristic algorithm, particle swarm optimization (PSO) [42]; and (4) a winner of the CEC competition, L-SHADE [43]. Among the AOA versions, the AOA was employed to validate the effectiveness of the MCLAOA, while the CAOA was employed to test the competitiveness of the MCLAOA against improved AOA variants. It is worth noting that the CAOA [18] has been proven to outperform some advanced algorithms, including the HHO [44], the EO [45], and the WHO [46], on certain optimization problems. The advanced and classical metaheuristic algorithms can confirm that the MCLAOA achieves state-of-the-art performance and outperforms classical algorithms of the same type. As for the CEC competition winner, once it is confirmed that the MCLAOA outperforms L-SHADE, it can be classified as a high-performance optimizer. All algorithms were set with the parameters shown in Table 2. For fairness of comparison, a population size of $N_p = 50$ and a maximum number of function evaluations $FES = 100{,}000$ were selected for the 23 benchmark functions. All algorithms were independently run 30 times on each test function. To compare these algorithms, the evaluation indicators used were the average ($ave$), the standard deviation ($std$), and the best optimal value ($best$); convergence curves were used to indicate the convergence performance of the algorithms. Box plots were adopted to verify the stability of the algorithms. The Wilcoxon rank-sum test and the Friedman rank test were employed to reflect the statistical significance of the results [47]. Next, experimental analysis was conducted on the 23 benchmark functions.

4.1. Results Comparisons Using 23 Benchmark Functions

The 23 benchmark functions [16] are a classic benchmark for evaluating optimization algorithms and can be divided into three types: unimodal functions (F1–F7), multimodal functions (F8–F13), and fixed-dimension multimodal functions (F14–F23). For details on the 23 benchmark functions, please refer to Table 1. The experimental results of all algorithms in terms of $ave$, $std$, and $best$ are shown in Table 3, and the best results for each function are marked in bold type.

4.1.1. Unimodal Functions and Exploitation

The unimodal functions (F1–F7) have only one global solution and no local solutions, so they are used to test the exploitation ability of the algorithm. It can be seen from Table 3 that the proposed MCLAOA has stronger advantages in terms of the $ave$ value compared to the other algorithms, except on F5 and F6. For F1–F4, both the MCLAOA and the CAOA achieve convergence with an $ave$ value of 0, and the RSA also converges to 0. However, the AOA fails to converge to 0 in terms of the $best$ metric. These results indicate that the exploitation performance of the MCLAOA has been significantly improved. This is attributed to the introduction of the CL strategy, which modifies the mathematical model of the exploitation phase and enhances convergence to the optimal solution by sharing information among individuals.

4.1.2. Multimodal Functions and Exploration

The multimodal functions contain multiple local optima and are used to test the algorithm’s ability to escape from poor local optima and obtain the near-global optimum. For the multimodal functions (F8–F13) with a dimension of 30, the $ave$ value of the proposed MCLAOA ranks first except on F12 and F13. Compared with the high-dimensional multimodal functions (F8–F13), the fixed-dimension multimodal functions (F14–F23) have only a few local minima, and the dimension of each function is small and fixed. It is worth noting that the $best$ values of all algorithms converge to the global optimum for F16–F19, and for F20–F23, all algorithms except the RSA also converge to the global optimum. These results demonstrate their ability to converge to the global optimum. However, when considering the $ave$ and $std$ values together, the proposed MCLAOA exhibits superior performance, indicating its ability to converge more stably to the global optimum. Therefore, it can be seen from the experimental results on the multimodal functions that the proposed MCLAOA has good global exploration performance. The MS strategy divides the population $N_p$ into $p$ subpopulations, introducing $p$ sub-elites instead of a single elite, which enhances the global search capability. Additionally, we introduce a teaching phase for half of the subpopulations, which alleviates the limitation of the expansion or shrinkage step size factor in the exploration phase of the AOA. As a result, the proposed MCLAOA outperforms other algorithms, especially on F8 and F21–F23.

4.1.3. Convergence Behavior Analysis

To observe the convergence of the proposed MCLAOA and the comparison algorithms, we recorded the fitness of the best solution at each iteration to draw the convergence curves. The convergence results of all algorithms on the 23 benchmark functions are shown in Figure 5. It can be seen from Figure 5 that only the CAOA, the RSA, and the MCLAOA show a clear downward trend on F1–F4, and the proposed MCLAOA shows a more obvious decrease on F3 and F4 compared to the CAOA and the RSA. For the multimodal functions F8–F11, the proposed MCLAOA converges to the global optimum significantly faster than the other seven comparison algorithms. All algorithms can converge to the global optimum on F16 and F17, but the convergence curve of the proposed MCLAOA drops faster than those of the AOA and the CAOA. Moreover, the proposed MCLAOA ranks first in convergence performance on F21–F23 among all comparison algorithms, indicating its good exploitation performance. In summary, the proposed MCLAOA achieves advanced performance; compared with the basic AOA and the improved CAOA, its convergence performance is improved. In addition, since the comparison algorithms are metaheuristic algorithms with inherent randomness, we employ box plots to analyze the stability of the results. Box plots of the global minimum results of the MCLAOA, the AOA, the CAOA, the RSA, the WOA, the GOA, the PSO, and the L-SHADE on F1–F7, F11, and F23 are shown in Figure 6. It can be observed from Figure 6 that the proposed MCLAOA outperforms the other algorithms in terms of the stability of the results during algorithm runs.
Based on the above analysis, the proposed MCLAOA shows strong advantages in terms of convergence accuracy, convergence speed and robustness.

4.2. Statistical Analysis

It is worth mentioning that statistical analysis is very important for the statistical authenticity of results in the field of optimization algorithms. In this paper, the Wilcoxon rank-sum test and the Friedman test are employed.
The statistical results of 23 benchmark functions are shown in Table 4. From the data results in Table 4, it can be observed that the proposed MCLAOA performs better than other comparison algorithms in most functions among the 23 benchmark functions. The number of functions in which the performance is improved compared to the basic AOA and improved CAOA is 16 and 9, respectively.
In order to compare the results of each run and determine the significance of the results, a non-parametric pair-wise Wilcoxon rank-sum test was employed. The tests were conducted at a significance level of 5%. For the Wilcoxon rank-sum test, the best-performing algorithm was chosen for each test function and compared to the other algorithms. That is, if the best algorithm is the MCLAOA, pair-wise comparisons are made between the MCLAOA and the AOA, the MCLAOA and the CAOA, the MCLAOA and the RSA, etc. Note that since the best algorithm cannot be compared with itself, N/A is written for the best algorithm in each function to indicate that the test is not applicable. The results are presented in Table 5. It is evident from Table 5 that these results are statistically significant, as the p-values are significantly less than 0.05 for almost all functions.
In order to calculate the ranking of each algorithm with statistical significance, we conducted a Friedman rank test for all tested algorithms over 23 benchmark functions, and the test results are shown in the last two rows of Table 3. The proposed MCLAOA algorithm ranked first among all algorithms with a Friedman value of 2.1034. Through a series of experiments, it has been verified that the proposed MCLAOA can be regarded as an advanced optimizer with statistically significant results.

4.3. Scalability Analysis

This section uses scalability analysis to verify the reliability of the proposed MCLAOA. Considering that the dimensions of the fixed-dimension multimodal functions (F14–F23) are fixed and cannot be changed, this paper selects one function each from the unimodal and multimodal functions for analysis, namely F1 and F11. In the scalability analysis, the dimension ranges from 100 to 500 with a step size of 100. The termination condition (i.e., $FES = 100{,}000$ with $N_p = 50$) and parameter settings are consistent with the above experimental conditions, and each function is independently executed 30 times at each dimension. The experimental results are shown in Figure 7, where the x-axis represents the dimension and the y-axis represents the average fitness value obtained from 30 independent runs at each dimension. It is worth noting that the red dashed box represents the enlarged content.
From Figure 7, it can be seen that for F1, almost all algorithms can converge to the global optimum in all dimensions, except for the GOA, the PSO, and the L-SHADE. For F11, the average fitness values of the MCLAOA, the RSA, and the WOA are very close as the dimension increases. At dimension 500, the average fitness of the MCLAOA is slightly lower than those of the RSA and the WOA, which is determined by their mathematical models. However, compared to the AOA and the CAOA, the MCLAOA performs the best in all dimensions, which strongly proves that the proposed MCLAOA has been greatly improved. These results demonstrate that the proposed MCLAOA is reliable, especially compared to the AOA and the CAOA, and exhibits excellent performance even when facing high-dimensional optimization problems.

4.4. Results Comparisons Using CEC2020 Benchmark Problems

To further verify the strong competitiveness and applicability of the proposed MCLAOA, we selected a more complex function benchmark (the CEC2020 benchmark problems [48]) and advanced comparison algorithms (the slime mould algorithm (SMA) [49] and the hybrid of TLBO with the GOA (TLGOA) [50]). Due to space limitations, the detailed description and experimental results for CEC2020 are presented in Appendix A. The CEC2020 benchmark problems can be divided into four types: unimodal functions (CF1), basic functions (CF2–CF4), hybrid functions (CF5–CF7), and composition functions (CF8–CF10). For details on the CEC2020 benchmark problems with a dimension of 10, please refer to Table A1. The parameter settings of the SMA and the TLGOA are consistent with the original literature [49,50]. For all algorithms, a population size of $N_p = 100$ and a maximum of $FES = 100{,}000$ function evaluations were used for the CEC2020 benchmark problems, and each algorithm was independently run 30 times. As with the 23 benchmark functions, we adopt $ave$, $std$, and $best$ as evaluation metrics to assess the optimization performance of all algorithms. The experimental results are shown in Table A2, and the best results are marked in bold type.
From Table A2, it can be observed that the MCLAOA shows the best performance in terms of $ave$ values on CF2, CF8, CF9, and CF10. The TLGOA performs best on CF4, CF5, and CF7. The SMA exhibits the best performance on CF1, CF3, and CF6. Furthermore, compared to the standard AOA, the MCLAOA demonstrates significantly better $ave$ values, indicating its effectiveness in improving optimization performance. It is worth noting that the Friedman rank test value of the MCLAOA in Table A2 is 2.3000, ranking first. The p-values corresponding to the Wilcoxon rank-sum test in Table A3 are almost all significantly smaller than 0.05. These results demonstrate that the experimental results in Table A2 are statistically significant. Therefore, the proposed MCLAOA also demonstrates promising performance on more complex benchmark problems.
In summary, we comprehensively analyzed the optimization performance of MCLAOA from several aspects, including accuracy, convergence curve, box plots, statistical analysis, and scalability analysis. These results also lay the foundation for the application of the algorithm to solve more complex optimization problems.

4.5. Engineering Design Problem

So far, we have analyzed the performance of the proposed MCLAOA on unconstrained function benchmarks. Next, we discuss optimization problems under complex constraint conditions in real-world scenarios. In this paper, two engineering examples, the three-bar truss design and the pressure vessel design, are employed to analyze the proposed MCLAOA.

4.5.1. Three-Bar Truss Design Problem

The objective of the three-bar truss design problem is to minimize the weight of the bar structures under certain constraints [16]. The three-bar truss design mainly involves two optimization parameters: the cross-sectional areas $A_1$ and $A_2$. There are three constraint conditions, and the mathematical model is shown in the following equations. Here, we compare the proposed MCLAOA with some existing optimization algorithms, and the experimental results are shown in Table 6. The experimental results demonstrate that the proposed MCLAOA exhibits strong competitiveness.
Take $x = [x_1\ x_2] = [A_1\ A_2]$,

Min. $f(x) = (2\sqrt{2}\,x_1 + x_2) \times l$,

Subject to
$$g_1(x) = \frac{\sqrt{2}\,x_1 + x_2}{\sqrt{2}\,x_1^2 + 2x_1x_2}P - \sigma \le 0,\qquad g_2(x) = \frac{x_2}{\sqrt{2}\,x_1^2 + 2x_1x_2}P - \sigma \le 0,\qquad g_3(x) = \frac{1}{\sqrt{2}\,x_2 + x_1}P - \sigma \le 0,$$

where $l = 100\ \mathrm{cm}$, $P = 2\ \mathrm{kN/cm^2}$, $\sigma = 2\ \mathrm{kN/cm^2}$, and $0 \le x_1, x_2 \le 1$.
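As an illustration of how a candidate [x1, x2] could be scored inside MCLAOA, the sketch below uses a static penalty for constraint violations; the paper does not state its constraint-handling scheme, so the penalty term and function name are assumptions.

```python
import math

def three_bar_truss_cost(x, penalty=1e6):
    # Assumes x1 > 0 and x2 > 0, consistent with the bounds 0 <= x1, x2 <= 1.
    x1, x2 = x
    l, P, sigma = 100.0, 2.0, 2.0
    f = (2.0 * math.sqrt(2.0) * x1 + x2) * l                 # objective: truss weight
    denom = math.sqrt(2.0) * x1 ** 2 + 2.0 * x1 * x2
    g = [(math.sqrt(2.0) * x1 + x2) / denom * P - sigma,     # g1
         x2 / denom * P - sigma,                             # g2
         1.0 / (math.sqrt(2.0) * x2 + x1) * P - sigma]       # g3
    # Static penalty: any violated constraint (g > 0) inflates the cost.
    return f + penalty * sum(max(0.0, gi) ** 2 for gi in g)
```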

4.5.2. Pressure Vessel Design Problem

The objective of the pressure vessel design problem is to minimize the total cost of a cylindrical pressure vessel [16]. The pressure vessel design involves four design variables: the inner radius ($R$), the thickness of the head ($T_h$), the thickness of the shell ($T_s$), and the length of the cylindrical section without considering the head ($L$). There are four constraints, and the mathematical model can be represented by the following equations. Here, we compare the proposed MCLAOA with some existing optimization algorithms on the pressure vessel design, and the experimental results are shown in Table 7. It can be clearly seen that the proposed MCLAOA has significant advantages in solving the pressure vessel design problem.
Take $x = [x_1\ x_2\ x_3\ x_4] = [T_s\ T_h\ R\ L]$,

Min. $f(x) = 0.6224x_1x_3x_4 + 1.7781x_2x_3^2 + 3.1661x_1^2x_4 + 19.84x_1^2x_3$,

Subject to
$$g_1(x) = -x_1 + 0.0193x_3 \le 0,\qquad g_2(x) = -x_2 + 0.00954x_3 \le 0,$$
$$g_3(x) = -\pi x_3^2 x_4 - \frac{4}{3}\pi x_3^3 + 1296000 \le 0,\qquad g_4(x) = x_4 - 240 \le 0,$$
$$0 \le x_1, x_2 \le 99,\qquad 10 \le x_3, x_4 \le 200.$$
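The pressure vessel objective can be wrapped the same way; again the static penalty is our assumption rather than the paper's stated scheme.

```python
import math

def pressure_vessel_cost(x, penalty=1e6):
    ts, th, r, l = x                                       # [Ts, Th, R, L]
    f = (0.6224 * ts * r * l + 1.7781 * th * r ** 2
         + 3.1661 * ts ** 2 * l + 19.84 * ts ** 2 * r)     # total cost
    g = [-ts + 0.0193 * r,                                 # g1: shell thickness
         -th + 0.00954 * r,                                # g2: head thickness
         -math.pi * r ** 2 * l
         - 4.0 / 3.0 * math.pi * r ** 3 + 1296000.0,       # g3: minimum volume
         l - 240.0]                                        # g4: length limit
    return f + penalty * sum(max(0.0, gi) ** 2 for gi in g)
```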

5. Application of MCLAOA-BPNN for Cluster Fault Prediction

Due to the rapid development of computer technology, computer systems are widely used in various industries of the national economy [54,55]. Most current software systems can be viewed as cluster systems, which are parallel or distributed systems composed of a large number of independent computers [56]. As the number of nodes in cluster systems continues to increase, the frequency of node failures also increases, which seriously affects normal usage. In recent years, although existing research [57,58] on cluster system fault prediction has achieved good results, further improvement is needed in terms of prediction accuracy and efficiency. Therefore, this paper proposes the MCLAOA to optimize BPNN parameters and designs an MCLAOA-BPNN control cluster fault prediction method.

5.1. Control Cluster System

To meet the demand for the uninterrupted and reliable running of multiple computing jobs, a high-availability control cluster system is constructed. The network architecture of the system is shown in Figure 8. The high-availability cluster for multiple computing jobs consists of one central monitoring station and $m$ computing units. The central monitoring station is responsible for simulating two virtual computers (A and B, which are backups of each other), completing the monitoring of the high-availability cluster. Each computing unit is responsible for simulating $n$ virtual computers, completing the simulation of computing jobs, dynamic task allocation, migration, and other high-availability cluster functions. To ensure the sustainable operation of the cluster system, cluster fault prediction is crucial. Therefore, this paper designed a cluster fault prediction method based on the BPNN and used the proposed MCLAOA to optimize the parameters of the BPNN, thus improving the accuracy of cluster fault prediction.
For implementation details, the functions that control the cluster system were implemented using C++. Qt Creator was utilized to showcase these functions via a graphical interface, which also allowed for fault injection to observe the resource information of each PC. The proposed MCLAOA-BPNN was executed on MATLAB, providing fault prediction for the entire cluster system.

5.2. Cluster Fault Prediction Based on MCLAOA-BPNN

5.2.1. BP Neural Network

The computation process of the BPNN consists of a forward computation process and a backward computation process, and the network includes an input layer, a hidden layer, and an output layer. The BPNN receives data at the input layer, processes them in the hidden layer, and then calculates the difference between the network output and the true data. If the obtained result does not meet the set error value, the network enters the backpropagation process. During backpropagation, the weights and thresholds of the neurons in each layer are continually adjusted until the set error value or the predetermined number of training iterations is reached. The main process of the BPNN is as follows:
Step 1:
Parameter initialization: the number of nodes in the input layer, the hidden layer, and the output layer, as well as the initial weights and thresholds of each neuron.
Step 2:
Forward propagation.
Step 3:
Calculation of the error value between the output data and the expected data.
Step 4:
Update of the weights and thresholds.
Step 5:
Check whether the error value meets the set value. If not, return to Step 4 and update the weights and thresholds until the set error value is reached or the maximum training times are reached.
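To make the optimization target of the steps above concrete, the sketch below flattens all weights and thresholds of a 6-7-1 BPNN (the architecture chosen in Section 5.3.2) into one vector, which is the quantity MCLAOA later searches over. The sigmoid hidden layer, linear output, and helper names are assumptions consistent with a standard BPNN, not details stated in the paper.

```python
import numpy as np

def unpack(theta, n_in=6, n_hid=7, n_out=1):
    # Split a flat parameter vector into layer weights and thresholds.
    s1 = n_in * n_hid
    W1 = theta[:s1].reshape(n_hid, n_in)
    b1 = theta[s1:s1 + n_hid]
    s2 = s1 + n_hid
    W2 = theta[s2:s2 + n_hid * n_out].reshape(n_out, n_hid)
    b2 = theta[s2 + n_hid * n_out:]
    return W1, b1, W2, b2

def forward(theta, X):
    # Forward pass: sigmoid hidden layer, linear output. X: (n_samples, 6).
    W1, b1, W2, b2 = unpack(theta)
    H = 1.0 / (1.0 + np.exp(-(X @ W1.T + b1)))
    return H @ W2.T + b2
```

For a 6-7-1 network this gives 6×7 + 7 + 7×1 + 1 = 57 parameters per candidate solution.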

5.2.2. MCLAOA Optimizes BPNN

During the training process of the BPNN, the initial weights and thresholds are randomly generated, which affects the prediction performance of the model. Therefore, this paper adopts the proposed MCLAOA to optimize the weights and thresholds in the BPNN, called MCLAOA-BPNN. The flowchart of the proposed MCLAOA-BPNN cluster fault prediction method is shown in Figure 9, where the red dashed box is the proposed MCLAOA, and the blue dashed box is the MCLAOA-BPNN fault prediction model proposed in this paper. The specific implementation process is as follows:
Step 1:
Analyze the cluster system, determine the fault prediction indicators that affect the cluster system based on the network structure of the cluster system, and construct feature vectors.
Step 2:
Initialize the weights and thresholds of BPNN, the parameters of MCLAOA, and read the initial index data of the cluster system as the initial sample data.
Step 3:
Pre-process the sample data.
Step 4:
Use the MCLAOA to optimize the weights and thresholds of the BPNN and construct the MCLAOA-BPNN fault prediction model.
Step 5:
Check whether the termination condition is met. If the termination condition is met, the optimal weights and thresholds are output. Otherwise, skip to Step 4.
Step 6:
The optimized weights and thresholds are used as the weights and thresholds of the MCLAOA-BPNN model.
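Step 4 can be read as evaluating each candidate parameter vector by its training error. A minimal sketch of such a fitness function, reusing forward() from the BPNN sketch in Section 5.2.1, might be (the mean-squared-error choice is our assumption):

```python
import numpy as np

def bpnn_fitness(theta, X_train, y_train):
    # Fitness of one MCLAOA candidate: mean squared training error of the
    # BPNN whose weights/thresholds are encoded in theta (57 values for 6-7-1).
    pred = forward(theta, X_train).ravel()
    return np.mean((pred - y_train) ** 2)
```

The theta that minimizes this fitness then serves as the BPNN's initial weights and thresholds in Step 6.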

5.3. Experimental Results and Analysis

The high-availability cluster system constructed in this paper has a total of 42 nodes, including 2 central nodes and 40 computing nodes, i.e.,  m × n = 40 . The operating system is Ubuntu 16.04. For the proposed MCLAOA-BPNN fault prediction model, all experiments were performed on MATLAB 2017b, and they were run on a PC with Intel Core i7-10700 2.90 GHz and 16 GB RAM.
We used six main factors that affect cluster performance as sample data, including CPU consumption, memory usage, operating system process load, network traffic, I/O operations, and the number of processes. To simulate faulty behavior, we injected node failures, program errors, network faults, and performance anomalies to obtain fault data. In the data collection process, we collected sample data at 50 moments, normalized the sample data, and used 40 moments as training data and 10 moments as testing data. All data were collected from our own system and a public benchmark (https://ieee-dataport.org/open-access/big-data-machine-learning-benchmark-spark, accessed on 6 June 2019) [59].

5.3.1. Evaluation Criteria

To better evaluate the results of the data, we employed mean absolute error ( M A E ), root mean square error ( R M S E ), and mean absolute percentage error ( M A P E ) as evaluation metrics for the model [60]. The  M A E  can provide a measure of the overall accuracy of the predictions. The  R M S E  gives more weight to larger errors. The  M A P E  provides a relative measure of the prediction accuracy. The  M A E  and the  M A P E  are mainly used to measure the degree of difference between predicted values and true values, where the smaller the value, the higher the prediction accuracy of the model. The  R M S E  represents the degree of fluctuation in the difference, where the smaller the value, the more stable the prediction results. The formulas for calculating the  M A E , the  R M S E , and the  M A P E  are as follows:
$$MAE = \frac{1}{N}\sum_{i=1}^{N}\left|\hat{y}_i - y_i\right|, \qquad (12)$$

$$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_i - y_i\right)^2}, \qquad (13)$$

$$MAPE = \frac{1}{N}\sum_{i=1}^{N}\left|\frac{\hat{y}_i - y_i}{y_i}\right|, \qquad (14)$$

where $N$ denotes the number of observations, $\hat{y}_i$ represents the predicted value, and $y_i$ is the true value.
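A direct NumPy transcription of the three metrics follows; it is a minimal sketch that assumes nonzero true values for the MAPE denominator.

```python
import numpy as np

def metrics(y_true, y_pred):
    err = y_pred - y_true
    mae = np.mean(np.abs(err))             # mean absolute error
    rmse = np.sqrt(np.mean(err ** 2))      # root mean square error
    mape = np.mean(np.abs(err / y_true))   # mean absolute percentage error
    return mae, rmse, mape
```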

5.3.2. Compared Algorithms and Parametric Setup

To evaluate the performance of the proposed MCLAOA-BPNN model, we compared it with the basic BPNN model and swarm-optimized BPNN models (based on the PSO [42], the AOA [16], the CAOA [18], and the sine cosine algorithm (SCA) [61]). In all experiments, the parameter settings were as follows: all population sizes were 50, the number of iterations was 500, and other parameters were set to default values. Additionally, based on the influencing factors, the input layer of the cluster fault prediction model was set to 6 nodes and the output layer to 1 node. The number of hidden layer nodes was not specified in advance, although it is crucial for prediction accuracy. Therefore, we trained the MCLAOA-BPNN cluster fault prediction model with a range of hidden layer node counts (5–12); the average $MAE$ over 10 test runs is shown in Table 8, with the best results marked in bold type. Based on Table 8, the number of hidden layer nodes was set to 7, so the final architecture is a three-layer MCLAOA-BPNN with a 6-7-1 configuration. Furthermore, Figure 10 demonstrates that the proposed model converged after six epochs, where one epoch refers to one forward propagation and one backward propagation of the BPNN over the training data.

5.3.3. Comparison with Other BPNN Models

To verify that the proposed model is highly competitive, we compared MCLAOA-BPNN with other fault prediction models, including the BPNN [62], the PSO-BPNN [42], the AOA-BPNN [16], the CAOA-BPNN [18], and the SCA-BPNN [61]. The errors between the predicted values and true values of different prediction models on different sample data are shown in Figure 11. It can be seen from Figure 11 that the prediction accuracy of BPNN has been improved by swarm-optimized BPNN. The proposed MCLAOA-BPNN shows significant performance, especially compared to the AOA-BPNN and the CAOA-BPNN.
To quantify the results, $MAE$, $RMSE$, and $MAPE$ were employed, as shown in Table 9. From Table 9, it can be seen that compared with the AOA-BPNN and the CAOA-BPNN, the proposed MCLAOA-BPNN improves by 1.526/1.236, 1.783/1.283, and 0.8762/0.6111 in terms of $MAE$, $RMSE$, and $MAPE$, respectively. Compared with the basic BPNN and the other swarm-optimized BPNNs, the proposed MCLAOA-BPNN’s prediction accuracy also ranks first. These results demonstrate that our model can better perform cluster fault prediction.

6. Conclusions

In the fault prediction of control cluster systems, the improper setting of initial weights and thresholds in a traditional BPNN can lead to low accuracy. To address this issue, this paper proposes a new swarm intelligence algorithm called MCLAOA and utilizes MCLAOA to optimize the initial weights and thresholds of BPNN, constructing the MCLAOA-BPNN control cluster fault prediction model. To validate the effectiveness of the proposed MCLAOA, 23 benchmark functions, CEC2020 benchmark problems, and two engineering examples were employed. Furthermore, we compared the proposed MCLAOA-BPNN with other swarm-intelligence-based BPNN models to demonstrate its high prediction accuracy.
The following points present the specific experimental results.
  • It can be observed from Table 5 that the p-values of the Wilcoxon rank-sum test for the comparison algorithms are less than 0.05 for most functions. This indicates that the $ave$ and $std$ values obtained by all algorithms in Table 3 are statistically significant.
  • From the last two rows of Table 3, it can be observed that the Friedman rank test value of the MCLAOA is 2.1034, ranking first. According to the statistical results for the 23 benchmark functions in Table 4, the MCLAOA has improved convergence performance in 16 and 13 functions compared to the basic AOA and L-SHADE, respectively.
  • The convergence curve and box plots prove that MCLAOA has a faster convergence speed and better robustness.
  • Scalability analysis confirms that the MCLAOA has strong and stable performance.
  • From Table A2 and Table A3, it can be observed that the MCLAOA exhibits significant advantages on the CEC2020 benchmark problems and demonstrates statistical significance.
  • According to the experimental results for $MAE$, $RMSE$, and $MAPE$, compared with the basic BPNN/AOA-BPNN, the MCLAOA-BPNN improved by 3.696/1.526, 4.423/1.783, and 2.0538/0.8762, respectively. Furthermore, the MCLAOA-BPNN outperforms other swarm-intelligence-based BPNN models.
The main limitations of the proposed MCLAOA are as follows: to ensure convergence speed, the MS strategy is only applied in the exploration phase of the AOA. Once the algorithm falls into a local optimum during the exploitation phase, it lacks the ability to escape. As a result, it may exhibit poor performance in more complex practical application scenarios such as image processing and engineering design. Furthermore, a large number of iterations can increase the computational cost of the fault prediction model. In the future, we plan to introduce mutation operators to enhance the algorithm’s ability to escape local optima. We will also design a convergence monitoring technique to determine whether the expected value has been reached and, if so, terminate the iterations early to reduce unnecessary cost. Moreover, the proposed MCLAOA can be extended to handle structural optimization, feature selection, etc.

Author Contributions

Conceptualization, T.X. and Z.G.; Methodology, T.X. and Y.Z.; Software, T.X. and Z.G.; Validation, Z.G.; Writing—original draft, T.X.; Writing—review and editing, Z.G. and Y.Z.; Visualization, Y.Z.; Funding acquisition, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (No. 62072235).

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

The following symbols are used in this manuscript:
t: Current iteration number
T: Maximum iteration number
Max: Maximum value of the accelerated function
Min: Minimum value of the accelerated function
ϵ: A small integer
UB_j: Upper bound value of the jth dimension
LB_j: Lower bound value of the jth dimension
η: Control parameter
α: Sensitive parameter
S_j: Step factor
Np: Population size
p: Number of subpopulations
q: Number of individuals in each subpopulation
arg f: Inverse function of fitness evaluation
TF: Teacher factor
D: Dimension
m: Number of computing units
n: Number of PCs in each computing unit
N: Number of observations

Appendix A

Table A1. CEC2020 benchmark problems.

Type | No. | Description | f_min
Unimodal functions | CF1 | Shifted and Rotated Bent Cigar Function (CEC 2017 F1) | 100
Basic functions | CF2 | Shifted and Rotated Schwefel's Function (CEC 2014 F11) | 1100
Basic functions | CF3 | Shifted and Rotated Lunacek bi-Rastrigin Function (CEC 2017 F7) | 700
Basic functions | CF4 | Expanded Rosenbrock's plus Griewangk's Function (CEC 2017 F19) | 1900
Hybrid functions | CF5 | Hybrid Function 1 (N = 3) (CEC 2014 F17) | 1700
Hybrid functions | CF6 | Hybrid Function 2 (N = 4) (CEC 2017 F16) | 1600
Hybrid functions | CF7 | Hybrid Function 3 (N = 5) (CEC 2014 F21) | 2100
Composition functions | CF8 | Composition Function 1 (N = 3) (CEC 2017 F22) | 2200
Composition functions | CF9 | Composition Function 2 (N = 4) (CEC 2017 F24) | 2400
Composition functions | CF10 | Composition Function 3 (N = 5) (CEC 2017 F25) | 2500
Table A2. Numerical results of the MCLAOA and other algorithms on the CEC2020 benchmark problems.

Function | Criteria | MCLAOA | AOA | CAOA | RSA | WOA | GOA | SMA | TLGOA
CF1 | ave | 4.7251E+08 | 6.5860E+09 | 3.9179E+08 | 1.8028E+10 | 2.3351E+05 | 2.2205E+09 | 8.5186E+03 | 5.8672E+06
CF1 | std | 5.2285E+08 | 2.4988E+09 | 5.0399E+08 | 2.7575E+09 | 5.7869E+05 | 1.7592E+09 | 4.0192E+03 | 1.2834E+07
CF1 | best | 1.3534E+07 | 1.2835E+09 | 1.2241E+03 | 4.5193E+09 | 1.3832E+04 | 1.4507E+04 | 289.6965 | 1.3823E+06
CF2 | ave | 1.3909E+03 | 1.7983E+03 | 1.8092E+03 | 2.3823E+03 | 2.1676E+03 | 2.1102E+03 | 1.5779E+03 | 1.9750E+03
CF2 | std | 130.6380 | 183.0234 | 154.0381 | 194.0336 | 287.2063 | 393.2515 | 226.7420 | 283.4210
CF2 | best | 1.2222E+03 | 1.4506E+03 | 1.4609E+03 | 1.9660E+03 | 1.6097E+03 | 1.2672E+03 | 1.2337E+03 | 1.5287E+03
CF3 | ave | 764.4841 | 804.0515 | 795.0880 | 802.4801 | 779.2079 | 8.8384E+02 | 721.4592 | 726.2413
CF3 | std | 17.0783 | 6.9201 | 11.4560 | 13.2465 | 26.5078 | 67.9737 | 5.3659 | 8.0244
CF3 | best | 730.7717 | 794.3927 | 765.6510 | 765.6002 | 741.4918 | 8.1811E+02 | 711.5759 | 711.7954
CF4 | ave | 6.6057E+03 | 9.5217E+03 | 1.6702E+04 | 5.5289E+05 | 2.3003E+04 | 8.5273E+06 | 2.6294E+03 | 1.9512E+03
CF4 | std | 4.0780E+03 | 7.6283E+03 | 1.2394E+04 | 7.6822E+05 | 4.3024E+04 | 1.6047E+07 | 1.5176E+03 | 50.5600
CF4 | best | 2.3001E+03 | 2.0331E+03 | 1.9303E+03 | 9.4937E+03 | 2.1231E+03 | 3.1979E+05 | 1.9041E+03 | 1.9075E+03
CF5 | ave | 4.2807E+03 | 4.9853E+04 | 4.2818E+03 | 4.5244E+05 | 1.3354E+05 | 3.2091E+04 | 9.4876E+03 | 3.2977E+03
CF5 | std | 2.2491E+03 | 2.8177E+04 | 1.0096E+03 | 1.3636E+05 | 2.1080E+05 | 5.5254E+04 | 6.7329E+03 | 5.6970E+03
CF5 | best | 2.2322E+03 | 1.1207E+04 | 3.1850E+03 | 4.4142E+04 | 5.1664E+03 | 2.7526E+03 | 1.9067E+03 | 1.8024E+03
CF6 | ave | 1.8286E+03 | 1.9828E+03 | 1.9183E+03 | 2.0728E+03 | 1.8357E+03 | 3.0539E+03 | 1.6663E+03 | 1.8319E+03
CF6 | std | 124.5011 | 138.4976 | 144.2710 | 93.7524 | 117.3325 | 331.0472 | 69.1540 | 132.5534
CF6 | best | 1.6122E+03 | 1.7364E+03 | 1.6192E+03 | 1.9127E+03 | 1.6397E+03 | 2.4461E+03 | 1.6065E+03 | 1.6400E+03
CF7 | ave | 4.1952E+03 | 5.6449E+03 | 6.2259E+03 | 1.7514E+05 | 3.2918E+04 | 6.9666E+03 | 2.8949E+03 | 2.7665E+03
CF7 | std | 1.7962E+03 | 2.3694E+03 | 2.6621E+03 | 1.7081E+05 | 3.2594E+04 | 7.5302E+03 | 1.7753E+03 | 410.4163
CF7 | best | 2.1461E+03 | 2.3547E+03 | 2.3133E+03 | 7.8506E+03 | 4.2554E+03 | 2.4166E+03 | 2.1203E+03 | 2.1776E+03
CF8 | ave | 2.3265E+03 | 2.9057E+03 | 2.5958E+03 | 2.9702E+03 | 2.3315E+03 | 5.7708E+03 | 2.3266E+03 | 2.3581E+03
CF8 | std | 38.6359 | 315.3173 | 117.4490 | 240.2314 | 15.3485 | 1.9670E+03 | 166.5267 | 237.6234
CF8 | best | 2.3060E+03 | 2.2885E+03 | 2.3035E+03 | 2.5197E+03 | 2.2393E+03 | 2.4165E+03 | 2.2000E+03 | 2.3045E+03
CF9 | ave | 2.7189E+03 | 2.7927E+03 | 2.7211E+03 | 2.8507E+03 | 2.7519E+03 | 3.0282E+03 | 2.7475E+03 | 2.7426E+03
CF9 | std | 103.1237 | 88.3431 | 137.8636 | 44.4965 | 77.6254 | 52.7887 | 47.5504 | 102.2214
CF9 | best | 2.5337E+03 | 2.5774E+03 | 2.5000E+03 | 2.7494E+03 | 2.5045E+03 | 2.9284E+03 | 2.5000E+03 | 2.4563E+03
CF10 | ave | 2.9229E+03 | 3.2008E+03 | 2.9648E+03 | 2.9232E+03 | 2.9583E+03 | 2.9573E+03 | 2.9688E+03 | 2.9410E+03
CF10 | std | 49.2665 | 145.6641 | 35.1046 | 23.0547 | 31.1292 | 62.4765 | 55.9072 | 29.2071
CF10 | best | 2.8138E+03 | 2.9688E+03 | 2.8996E+03 | 3.0303E+03 | 2.9011E+03 | 2.8886E+03 | 2.8979E+03 | 2.8986E+03
Friedman rank test | value | 2.3000 | 5.6000 | 4.3000 | 7.4000 | 4.7000 | 6.6000 | 2.6000 | 2.5000
Friedman rank test | rank | 1 | 6 | 4 | 8 | 5 | 7 | 3 | 2
Note: The best results of each function were marked in bold type in terms of ave.
Table A3. p-values of the Wilcoxon rank-sum test over CEC2020 benchmark problem.
Function | MCLAOA | AOA | CAOA | RSA | WOA | GOA | SMA | TLGOA
CF1 | 3.0199E-11 | 3.0199E-11 | 5.9706E-05 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | N/A | 3.0199E-11
CF2 | N/A | 6.7220E-10 | 1.7769E-10 | 3.0199E-11 | 4.0772E-11 | 1.1737E-09 | 4.7138E-04 | 2.6099E-10
CF3 | 3.3384E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | N/A | 0.0170
CF4 | 3.0199E-11 | 4.9752E-11 | 1.4643E-10 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 0.8534 | N/A
CF5 | 6.5277E-08 | 7.3891E-11 | 1.0702E-09 | 3.0199E-11 | 1.4643E-10 | 5.5727E-10 | 8.8829E-06 | N/A
CF6 | 5.0912E-06 | 9.9186E-11 | 3.8249E-09 | 3.0199E-11 | 9.5139E-06 | 3.0199E-11 | N/A | 1.7294E-07
CF7 | 0.0163 | 4.1127E-07 | 5.5999E-07 | 3.0199E-11 | 3.0199E-11 | 7.0881E-08 | 0.0032 | N/A
CF8 | N/A | 5.5727E-10 | 8.8910E-10 | 3.0199E-11 | 0.1453 | 3.3384E-11 | 6.1210E-10 | 0.3871
CF9 | N/A | 5.8737E-04 | 0.4463 | 2.2273E-09 | 0.7958 | 3.0199E-11 | 0.0657 | 0.7845
CF10 | N/A | 7.3891E-11 | 2.2539E-04 | 3.3384E-11 | 1.1058E-04 | 0.0905 | 2.6806E-04 | 0.0281
Note: N/A represents the best algorithm in terms of optimization performance among all the algorithms for the corresponding function.

References

  1. Saxena, D.; Gupta, I.; Singh, A.K.; Lee, C.N. A fault tolerant elastic resource management framework toward high availability of cloud services. IEEE Trans. Netw. Serv. Manag. 2022, 19, 3048–3061. [Google Scholar] [CrossRef]
  2. Somasekaram, P.; Calinescu, R.; Buyya, R. High-availability clusters: A taxonomy, survey, and future directions. J. Syst. Softw. 2022, 187, 111208. [Google Scholar] [CrossRef]
  3. Reisizadeh, A.; Prakash, S.; Pedarsani, R.; Avestimehr, A.S. Coded computation over heterogeneous clusters. IEEE Trans. Inf. Theory 2019, 65, 4227–4242. [Google Scholar] [CrossRef] [Green Version]
  4. Khallouli, W.; Huang, J. Cluster resource scheduling in cloud computing: Literature review and research challenges. J. Supercomput. 2022, 78, 6898–6943. [Google Scholar]
  5. Arunarani, A.; Manjula, D.; Sugumaran, V. Task scheduling techniques in cloud computing: A literature survey. Future Gener. Comput. Syst. 2019, 91, 407–415. [Google Scholar] [CrossRef]
  6. Jena, U.K.; Das, P.K.; Kabat, M.R. Hybridization of meta-heuristic algorithm for load balancing in cloud computing environment. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 2332–2342. [Google Scholar]
  7. Ghomi, E.J.; Rahmani, A.M.; Qader, N.N. Load-balancing Algorithms in Cloud Computing: A Survey. J. Netw. Comput. Appl. 2017, 88, 50–71. [Google Scholar] [CrossRef]
  8. Luo, Q.; Hu, S.; Li, C.; Li, G.; Shi, W. Resource Scheduling in Edge Computing: A Survey. IEEE Commun. Surv. Tutorials 2021, 23, 2131–2165. [Google Scholar] [CrossRef]
  9. Amin, A.A.; Hasan, K.M. A review of Fault Tolerant Control Systems: Advancements and applications. Measurement 2019, 143, 58–68. [Google Scholar] [CrossRef]
  10. Abbaspour, A.; Mokhtari, S.; Sargolzaei, A.; Yen, K.K. A Survey on Active Fault-Tolerant Control Systems. Electronics 2020, 9, 1513. [Google Scholar] [CrossRef]
  11. Pinto, J.; Jain, P.; Kumar, T. Hadoop Distributed Computing Clusters for Fault Prediction. In Proceedings of the 2016 International Computer Science and Engineering Conference (ICSEC), Chiang Mai, Thailand, 14–17 December 2016; pp. 1–6. [Google Scholar]
  12. Mukwevho, M.A.; Celik, T. Toward a Smart Cloud: A Review of Fault-Tolerance Methods in Cloud Systems. IEEE Trans. Serv. Comput. 2021, 14, 589–605. [Google Scholar] [CrossRef]
  13. Das, D.; Schiewe, M.; Brighton, E.; Fuller, M.; Cerny, T.; Bures, M.; Frajtak, K.; Shin, D.; Tisnovsky, P. Failure Prediction by Utilizing Log Analysis: A Systematic Mapping Study; Association for Computing Machinery: New York, NY, USA, 2020. [Google Scholar]
  14. Bacanin, N.; Stoean, R.; Zivkovic, M.; Petrovic, A.; Rashid, T.A.; Bezdan, T. Performance of a novel chaotic firefly algorithm with enhanced exploration for tackling global optimization problems: Application for dropout regularization. Mathematics 2021, 9, 2705. [Google Scholar] [CrossRef]
  15. Malakar, S.; Ghosh, M.; Bhowmik, S.; Sarkar, R.; Nasipuri, M. A GA based hierarchical feature selection approach for handwritten word recognition. Neural Comput. Appl. 2020, 32, 2533–2552. [Google Scholar] [CrossRef]
  16. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  17. Yıldız, B.S.; Patel, V.; Pholdee, N.; Sait, S.M.; Bureerat, S.; Yıldız, A.R. Conceptual comparison of the ecogeography-based algorithm, equilibrium algorithm, marine predators algorithm and slime mold algorithm for optimal product design. Mater. Test. 2021, 63, 336–340. [Google Scholar] [CrossRef]
  18. Li, X.D.; Wang, J.S.; Hao, W.K.; Zhang, M.; Wang, M. Chaotic arithmetic optimization algorithm. Appl. Intell. 2022, 52, 16718–16757. [Google Scholar] [CrossRef]
  19. Çelik, E. IEGQO-AOA: Information-Exchanged Gaussian Arithmetic Optimization Algorithm with Quasi-opposition learning. Knowl. Based Syst. 2023, 260, 110169. [Google Scholar] [CrossRef]
  20. Abualigah, L.; Ewees, A.A.; Al-qaness, M.A.; Elaziz, M.A.; Yousri, D.; Ibrahim, R.A.; Altalhi, M. Boosting arithmetic optimization algorithm by sine cosine algorithm and levy flight distribution for solving engineering optimization problems. Neural Comput. Appl. 2022, 34, 8823–8852. [Google Scholar] [CrossRef]
  21. Mohamed, A.A.; Abdellatif, A.D.; Alburaikan, A.; Khalifa, H.A.E.W.; Elaziz, M.A.; Abualigah, L.; AbdelMouty, A.M. A novel hybrid arithmetic optimization algorithm and salp swarm algorithm for data placement in cloud computing. Soft Comput. 2023, 27, 5769–5780. [Google Scholar] [CrossRef]
  22. Rajagopal, R.; Karthick, R.; Meenalochini, P.; Kalaichelvi, T. Deep Convolutional Spiking Neural Network optimized with Arithmetic optimization algorithm for lung disease detection using chest X-ray images. Biomed. Signal Process. Control. 2023, 79, 104197. [Google Scholar] [CrossRef]
  23. Gölcük, İ.; Ozsoydan, F.B.; Durmaz, E.D. An improved arithmetic optimization algorithm for training feedforward neural networks under dynamic environments. Knowl. Based Syst. 2023, 263, 110274. [Google Scholar] [CrossRef]
  24. Shirazi, M.I.; Khatir, S.; Benaissa, B.; Mirjalili, S.; Wahab, M.A. Damage assessment in laminated composite plates using modal Strain Energy and YUKI-ANN algorithm. Compos. Struct. 2023, 303, 116272. [Google Scholar] [CrossRef]
  25. Kaveh, A.; Hamedani, K.B. Improved arithmetic optimization algorithm and its application to discrete structural optimization. Structures 2022, 35, 748–764. [Google Scholar] [CrossRef]
  26. Salimi, H. Stochastic fractal search: A powerful metaheuristic algorithm. Knowl. Based Syst. 2015, 75, 1–18. [Google Scholar] [CrossRef]
  27. Mirjalili, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2016, 27, 1053–1073. [Google Scholar] [CrossRef]
  28. Guha, D.; Roy, P.; Banerjee, S. Quasi-oppositional symbiotic organism search algorithm applied to load frequency control. Swarm Evol. Comput. 2017, 33, 46–67. [Google Scholar] [CrossRef]
  29. Truong, K.H.; Nallagownden, P.; Baharudin, Z.; Vo, D.N. A quasi-oppositional-chaotic symbiotic organisms search algorithm for global optimization problems. Appl. Soft Comput. 2019, 77, 567–583. [Google Scholar] [CrossRef]
  30. Mahajan, S.; Abualigah, L.; Pandit, A.K.; Altalhi, M. Hybrid Aquila optimizer with arithmetic optimization algorithm for global optimization tasks. Soft Comput. 2022, 26, 4863–4881. [Google Scholar] [CrossRef]
  31. Lalama, Z.; Boulfekhar, S.; Semechedine, F. Localization optimization in wsns using meta-heuristics optimization algorithms: A survey. Wirel. Pers. Commun. 2022, 122, 1197–1220. [Google Scholar] [CrossRef]
  32. Gad, A.G. Particle swarm optimization algorithm and its applications: A systematic review. Arch. Comput. Methods Eng. 2022, 29, 2531–2561. [Google Scholar] [CrossRef]
  33. Rahman, M.A.; Sokkalingam, R.; Othman, M.; Biswas, K.; Abdullah, L.; Abdul Kadir, E. Nature-inspired metaheuristic techniques for combinatorial optimization problems: Overview and recent advances. Mathematics 2021, 9, 2633. [Google Scholar] [CrossRef]
  34. Dhal, K.G.; Sasmal, B.; Das, A.; Ray, S.; Rai, R. A Comprehensive Survey on Arithmetic Optimization Algorithm. Arch. Comput. Methods Eng. 2023, 30, 3379–3404. [Google Scholar] [CrossRef] [PubMed]
  35. Zhang, H.; Gao, Z.; Zhang, J.; Liu, J.; Nie, Z.; Zhang, J. Hybridizing extended ant lion optimizer with sine cosine algorithm approach for abrupt motion tracking. EURASIP J. Image Video Process. 2020, 2020, 4. [Google Scholar] [CrossRef]
  36. Gao, Z.; Zhuang, Y.; Chen, C.; Wang, Q. Hybrid modified marine predators algorithm with teaching-learning-based optimization for global optimization and abrupt motion tracking. Multimed. Tools Appl. 2023, 82, 19793–19828. [Google Scholar] [CrossRef]
  37. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
  38. Rao, R.V.; Savsani, V.J.; Vakharia, D. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  39. Abualigah, L.; Abd Elaziz, M.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158. [Google Scholar] [CrossRef]
  40. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  41. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper optimisation algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef] [Green Version]
  42. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  43. Tanabe, R.; Fukunaga, A.S. Improving the Search Performance of SHADE Using Linear Population Size Reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 1658–1665. [Google Scholar]
  44. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  45. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl. Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  46. Naruei, I.; Keynia, F. Wild horse optimizer: A new meta-heuristic algorithm for solving engineering optimization problems. Eng. Comput. 2022, 38, 3025–3056. [Google Scholar] [CrossRef]
  47. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  48. Yue, C.T.; Price, K.V.; Suganthan, P.N.; Liang, J.J.; Ali, M.Z.; Qu, B.Y.; Awad, N.H.; Biswas, P.P. Problem Definitions and Evaluation Criteria for the CEC 2020 Special Session and Competition on Single Objective Bound Constrained Numerical Optimization. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion, Cancún, Mexico, 8–12 July 2020. [Google Scholar]
  49. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  50. Zhang, H.; Gao, Z.; Ma, X.; Zhang, J.; Zhang, J. Hybridizing teaching-learning-based optimization with adaptive grasshopper optimization algorithm for abrupt motion tracking. IEEE Access 2019, 7, 168575–168592. [Google Scholar] [CrossRef]
  51. Zheng, R.; Jia, H.; Abualigah, L.; Liu, Q.; Wang, S. An improved arithmetic optimization algorithm with forced switching mechanism for global optimization problems. Math. Biosci. Eng 2022, 19, 473–512. [Google Scholar] [CrossRef]
  52. Yang, Y.; Gao, Y.; Tan, S.; Zhao, S.; Wu, J.; Gao, S.; Zhang, T.; Tian, Y.C.; Wang, Y.G. An opposition learning and spiral modelling based arithmetic optimization algorithm for global continuous optimization problems. Eng. Appl. Artif. Intell. 2022, 113, 104981. [Google Scholar] [CrossRef]
  53. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  54. Catal, C. Software fault prediction: A literature review and current trends. Expert Syst. Appl. 2011, 38, 4626–4636. [Google Scholar] [CrossRef]
  55. Ahmed, Q.; Raza, S.A.; Al-Anazi, D.M. Reliability-based fault analysis models with industrial applications: A systematic literature review. Qual. Reliab. Eng. Int. 2021, 37, 1307–1333. [Google Scholar] [CrossRef]
  56. Agrawal, A. Concepts for distributed systems design. Proc. IEEE 1986, 74, 236. [Google Scholar] [CrossRef]
  57. Shafiq, M.; Alghamedy, F.H.; Jamal, N.; Kamal, T.; Daradkeh, Y.I.; Shabaz, M. Scientific programming using optimized machine learning techniques for software fault prediction to improve software quality. IET Software 2023, 1–11. [Google Scholar] [CrossRef]
  58. Tameswar, K. Towards Optimized K Means Clustering Using Nature-Inspired Algorithms for Software Bug Prediction. Available online: https://ssrn.com/abstract=4358066 (accessed on 14 February 2023).
  59. Jairson, R.; Germano, V. Big Data Machine Learning Benchmark on Spark. IEEE Dataport 2019. [Google Scholar] [CrossRef]
  60. Al-Musaylh, M.S.; Deo, R.C.; Li, Y.; Adamowski, J.F. Two-phase particle swarm optimized-support vector regression hybrid model integrated with improved empirical mode decomposition with adaptive noise for multiple-horizon electricity demand forecasting. Appl. Energy 2018, 217, 422–439. [Google Scholar] [CrossRef]
  61. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  62. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
Figure 1. Updating toward or away from destination.
Figure 2. The flowchart of the MS strategy.
Figure 3. The MOA with the increase of iterations.
Figure 4. Flowchart of the proposed MCLAOA.
Figure 5. Convergence curves for 23 benchmark functions. Better viewed in color with zoom-in.
Figure 6. Box plots of the result of a global minimum for functions F1, F7, F11, and F23.
Figure 7. Scalability analysis for functions.
Figure 8. The network architecture of high-availability control cluster system.
Figure 9. The flowchart of the MCLAOA-BPNN cluster fault prediction.
Figure 10. Number of training samples.
Figure 11. Comparison of predicted absolute error curves of each model.
Table 1. 23 benchmark functions.
Type | No. | Description | R | Dim | f_min
Unimodal functions | F1 | Sphere | [−100, 100] | 30 | 0
 | F2 | Schwefel 2.22 | [−10, 10] | 30 | 0
 | F3 | Schwefel 1.2 | [−100, 100] | 30 | 0
 | F4 | Schwefel 2.21 | [−100, 100] | 30 | 0
 | F5 | Rosenbrock | [−30, 30] | 30 | 0
 | F6 | Step | [−100, 100] | 30 | 0
 | F7 | Quartic | [−1.28, 1.28] | 30 | 0
Multimodal functions | F8 | Schwefel | [−500, 500] | 30 | −418.98 × D
 | F9 | Rastrigin | [−5.12, 5.12] | 30 | 0
 | F10 | Ackley | [−32, 32] | 30 | 0
 | F11 | Griewank | [−600, 600] | 30 | 0
 | F12 | Penalized | [−50, 50] | 30 | 0
 | F13 | Penalized 2 | [−50, 50] | 30 | 0
Fixed-dimension multimodal functions | F14 | Foxholes | [−65.536, 65.536] | 2 | 0.998
 | F15 | Kowalik | [−5, 5] | 4 | 0.0003
 | F16 | Six-hump Camel-Back | [−5, 5] | 2 | −1.0316
 | F17 | Branin | [−5, 5] | 2 | 0.398
 | F18 | Goldstein-Price | [−2, 2] | 2 | 3
 | F19 | Hartman 3 | [1, 3] | 3 | −3.863
 | F20 | Hartman 6 | [0, 1] | 6 | −3.322
 | F21 | Shekel 5 | [0, 10] | 4 | −10.153
 | F22 | Shekel 7 | [0, 10] | 4 | −10.403
 | F23 | Shekel 10 | [0, 10] | 4 | −10.536
Table 2. The parameter setting of the algorithms.
Algorithms | Parameters | Values
MCLAOA | α, η, Max, Min, p | 5, 0.5, 1, 0.2, 5
AOA [16] | α, η, Max, Min | 5, 0.5, 1, 0.2
CAOA [18] | α, η | 5, 0.5
RSA [39] | α, β | 0.1, 0.1
WOA [40] | b, l, a, p | 1, [−1, 1], 2 to 0, [0, 1]
GOA [41] | l, f, c | 1.5, 0.5, [0, 1]
PSO [42] | W, C1, C2, Vmax | 0.9 to 0.4, 2, 2, 4
L-SHADE [43] | H, P_best rate, Arc rate | 5, 0.11, 1.4
Table 3. Numerical results of MCLAOA with other algorithms using 23 benchmark functions.
Function | Criteria | MCLAOA | AOA | CAOA | RSA | WOA | GOA | PSO | LSHADE
F1 | ave | 0.00E+00 | 6.42E-08 | 0.00E+00 | 0.00E+00 | 4.94E-324 | 2.81E-06 | 8.77E-03 | 3.96E-22
 | std | 0.00E+00 | 6.66E-08 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.88E-06 | 4.74E-03 | 9.97E-22
 | best | 0.00E+00 | 1.93E-13 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 7.02E-07 | 2.04E-03 | 2.67E-24
F2 | ave | 0.00E+00 | 3.23E-05 | 0.00E+00 | 0.00E+00 | 3.91E-222 | 6.14E+00 | 1.32E+00 | 6.53E-08
 | std | 0.00E+00 | 9.34E-05 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.05E+01 | 8.00E-01 | 2.31E-07
 | best | 0.00E+00 | 8.55E-17 | 0.00E+00 | 0.00E+00 | 1.61E-234 | 4.51E-03 | 2.90E-01 | 4.49E-12
F3 | ave | 0.00E+00 | 6.71E-06 | 0.00E+00 | 0.00E+00 | 2.85E+03 | 3.91E+02 | 9.53E-02 | 1.12E-01
 | std | 0.00E+00 | 1.44E-05 | 0.00E+00 | 0.00E+00 | 2.97E+03 | 1.27E+03 | 4.78E-02 | 1.29E-01
 | best | 0.00E+00 | 6.75E-14 | 0.00E+00 | 0.00E+00 | 1.37E+02 | 1.81E+01 | 1.44E-02 | 6.81E-03
F4 | ave | 0.00E+00 | 1.96E-03 | 0.00E+00 | 0.00E+00 | 2.48E+01 | 6.73E-01 | 5.61E-01 | 6.07E+00
 | std | 0.00E+00 | 3.63E-03 | 0.00E+00 | 0.00E+00 | 2.63E+01 | 6.27E-01 | 3.46E-01 | 1.82E+00
 | best | 0.00E+00 | 1.13E-05 | 0.00E+00 | 0.00E+00 | 8.14E-04 | 1.47E-01 | 1.22E-01 | 2.61E+00
F5 | ave | 2.88E+01 | 2.56E+01 | 2.74E+01 | 8.64E+00 | 2.58E+01 | 1.09E+02 | 4.23E+01 | 3.72E+01
 | std | 3.45E-01 | 3.87E-01 | 3.26E-01 | 1.34E+01 | 2.80E-01 | 2.40E+02 | 3.33E+01 | 2.32E+01
 | best | 2.81E+01 | 2.45E+01 | 2.62E+01 | 5.85E-29 | 2.50E+01 | 1.88E+01 | 2.28E+01 | 1.26E+01
F6 | ave | 9.67E+00 | 1.42E-07 | 9.92E-05 | 6.58E+00 | 2.87E-04 | 2.53E-06 | 7.66E-03 | 7.18E-22
 | std | 2.74E-01 | 3.68E-08 | 4.17E-05 | 7.59E-01 | 9.95E-05 | 1.80E-06 | 3.64E-03 | 2.02E-21
 | best | 5.80E-01 | 7.47E-08 | 4.63E-05 | 5.06E+00 | 1.37E-04 | 6.00E-07 | 2.41E-03 | 3.60E-25
F7 | ave | 9.55E-07 | 8.93E-06 | 9.93E-06 | 1.84E-05 | 6.32E-04 | 1.68E-02 | 6.69E-02 | 3.70E-02
 | std | 1.08E-06 | 8.50E-06 | 8.78E-06 | 1.76E-05 | 6.85E-04 | 5.28E-03 | 2.47E-02 | 1.55E-02
 | best | 4.20E-09 | 2.90E-07 | 1.47E-07 | 7.54E-07 | 5.45E-05 | 8.52E-03 | 2.87E-02 | 1.18E-02
F8 | ave | −1.02E+06 | −5.66E+03 | −8.16E+03 | −5.51E+03 | −1.22E+04 | −7.26E+03 | −4.23E+03 | −2.09E+04
 | std | 4.98E+05 | 4.31E+02 | 4.46E+02 | 1.61E+02 | 5.97E+02 | 6.82E+02 | 1.68E+03 | 8.66E+01
 | best | −2.16E+06 | −6.51E+03 | −8.98E+03 | −5.74E+03 | −1.26E+04 | −8.40E+03 | −7.70E+03 | −2.09E+04
F9 | ave | 0.00E+00 | 2.14E-09 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.68E+02 | 4.00E+01 | 4.28E-08
 | std | 0.00E+00 | 1.12E-08 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 5.62E+01 | 1.13E+01 | 2.05E-08
 | best | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 6.17E+01 | 2.39E+01 | 4.65E-09
F10 | ave | 8.88E-16 | 1.29E-05 | 8.88E-16 | 8.88E-16 | 4.44E-15 | 1.47E+00 | 3.55E+00 | 2.56E+00
 | std | 0.00E+00 | 2.49E-05 | 0.00E+00 | 0.00E+00 | 2.29E-15 | 9.42E-01 | 6.02E-01 | 5.37E-01
 | best | 8.88E-16 | 1.51E-12 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 1.97E-04 | 2.53E+00 | 1.56E+00
F11 | ave | 0.00E+00 | 7.01E-07 | 7.53E-04 | 0.00E+00 | 9.44E-04 | 2.34E-02 | 1.08E-02 | 6.22E-03
 | std | 0.00E+00 | 2.31E-07 | 4.04E-03 | 0.00E+00 | 5.17E-03 | 1.81E-02 | 1.49E-02 | 1.34E-02
 | best | 0.00E+00 | 4.02E-07 | 3.64E-06 | 0.00E+00 | 0.00E+00 | 1.05E-03 | 2.32E-04 | 1.11E-16
F12 | ave | 5.29E-02 | 2.27E-01 | 1.27E-04 | 1.18E+00 | 9.80E-04 | 2.42E+00 | 1.45E-01 | 1.29E-01
 | std | 2.87E-02 | 2.60E-02 | 1.50E-04 | 3.66E-01 | 4.24E-03 | 1.44E+00 | 1.84E-01 | 2.59E-01
 | best | 1.35E-02 | 1.84E-01 | 1.33E-05 | 5.04E-01 | 1.90E-05 | 1.29E-01 | 3.42E-05 | 1.85E-22
F13 | ave | 2.47E+00 | 2.96E+00 | 2.85E+00 | 2.53E-01 | 6.79E-03 | 7.09E-03 | 5.67E-01 | 2.47E-01
 | std | 1.06E-01 | 1.70E-02 | 1.58E-01 | 7.40E-01 | 1.92E-02 | 9.50E-03 | 1.28E+00 | 7.92E-01
 | best | 2.25E+00 | 2.91E+00 | 2.22E+00 | 5.29E-32 | 3.96E-04 | 9.16E-06 | 6.10E-04 | 4.97E-24
F14 | ave | 7.05E+00 | 9.60E+00 | 7.69E+00 | 2.54E+00 | 1.13E+00 | 9.98E-01 | 1.26E+00 | 9.98E-01
 | std | 5.12E+00 | 4.79E+00 | 3.45E+00 | 1.92E+00 | 5.03E-01 | 3.44E-16 | 9.32E-01 | 0.00E+00
 | best | 9.98E-01 | 9.98E-01 | 9.98E-01 | 1.01E+00 | 9.98E-01 | 9.98E-01 | 9.98E-01 | 9.98E-01
F15 | ave | 5.68E-03 | 2.12E-03 | 7.14E-04 | 9.71E-04 | 5.00E-04 | 8.09E-03 | 1.05E-03 | 3.38E-04
 | std | 1.04E-02 | 4.95E-03 | 1.13E-03 | 3.41E-04 | 2.73E-04 | 1.26E-02 | 3.66E-03 | 1.67E-04
 | best | 4.51E-04 | 3.08E-04 | 3.12E-04 | 4.68E-04 | 3.09E-04 | 3.08E-04 | 3.07E-04 | 3.07E-04
F16 | ave | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00
 | std | 1.13E-02 | 1.34E-12 | 7.85E-12 | 2.66E-04 | 1.83E-13 | 2.45E-15 | 6.78E-16 | 6.78E-16
 | best | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00
F17 | ave | 3.98E-01 | 3.98E-01 | 3.98E-01 | 4.01E-01 | 3.98E-01 | 3.98E-01 | 3.98E-01 | 3.98E-01
 | std | 1.25E-09 | 1.54E-07 | 5.77E-07 | 6.34E-03 | 8.31E-09 | 3.66E-08 | 0.00E+00 | 0.00E+00
 | best | 3.98E-01 | 3.98E-01 | 3.98E-01 | 3.98E-01 | 3.98E-01 | 3.98E-01 | 3.98E-01 | 3.98E-01
F18 | ave | 3.00E+00 | 8.40E+00 | 1.11E+01 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00
 | std | 1.22E-14 | 1.65E+01 | 1.26E+01 | 3.96E-05 | 8.75E-07 | 3.17E-14 | 1.26E-15 | 1.68E-15
 | best | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00
F19 | ave | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.85E+00 | −3.86E+00 | −3.76E+00 | −3.86E+00 | −3.86E+00
 | std | 2.32E-15 | 3.40E-07 | 7.21E-04 | 1.38E-02 | 2.09E-03 | 2.39E-01 | 2.40E-03 | 2.71E-15
 | best | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00
F20 | ave | −3.27E+00 | −3.30E+00 | −3.28E+00 | −2.89E+00 | −3.23E+00 | −3.28E+00 | −3.24E+00 | −3.31E+00
 | std | 1.06E-01 | 4.51E-02 | 5.85E-02 | 2.51E-01 | 8.78E-02 | 5.83E-02 | 8.22E-02 | 3.63E-02
 | best | −3.32E+00 | −3.32E+00 | −3.32E+00 | −3.16E+00 | −3.32E+00 | −3.32E+00 | −3.32E+00 | −3.32E+00
F21 | ave | −10.1513 | −9.39E+00 | −9.98E+00 | −5.06E+00 | −9.98E+00 | −5.64E+00 | −6.65E+00 | −9.82E+00
 | std | 2.24E-03 | 2.01E+00 | 9.31E-01 | 3.03E-07 | 9.30E-01 | 3.37E+00 | 3.45E+00 | 1.28E+00
 | best | −10.1532 | −10.1532 | −10.1532 | −5.06E+00 | −10.1532 | −10.1532 | −10.1532 | −10.1532
F22 | ave | −10.4029 | −9.35E+00 | −9.97E+00 | −5.09E+00 | −9.83E+00 | −5.80E+00 | −6.93E+00 | −1.02E+01
 | std | 8.73E-16 | 2.42E+00 | 1.67E+00 | 2.89E-02 | 1.77E+00 | 3.64E+00 | 3.61E+00 | 1.22E+00
 | best | −10.4029 | −10.4029 | −10.4029 | −5.25E+00 | −10.4029 | −10.4029 | −10.4029 | −10.4029
F23 | ave | −10.5364 | −7.95E+00 | −1.03E+01 | −5.13E+00 | −1.05E+01 | −6.02E+00 | −7.09E+00 | −10.5364
 | std | 2.86E-15 | 3.73E+00 | 1.41E+00 | 1.40E-06 | 8.00E-05 | 4.03E+00 | 3.79E+00 | 1.75E-15
 | best | −10.5364 | −10.5364 | −10.5364 | −5.13E+00 | −10.5364 | −10.5364 | −10.5364 | −10.5364
Friedman rank test | score | 2.1034 | 3.9655 | 3.3103 | 4.7241 | 4.2759 | 6.0345 | 6.3448 | 5.2414
 | rank | 1 | 3 | 2 | 5 | 4 | 7 | 8 | 6
Note: The best results of each function were marked in bold type in terms of ave.
Table 4. Statistical results over 23 benchmark functions.
Algorithm | + | − | ≈
AOA | 4 | 16 | 3
CAOA | 5 | 9 | 9
RSA | 5 | 9 | 9
WOA | 6 | 12 | 5
GOA | 4 | 16 | 3
PSO | 4 | 15 | 4
L-SHADE | 5 | 13 | 5
Note: “+”, “−”, “≈” denote that the performance of the corresponding algorithm is statistically better than, worse than, and similar to that of MCLAOA, respectively.
Table 5. p-values of the Wilcoxon rank-sum test over 23 benchmark functions.
Function | MCLAOA | AOA | CAOA | RSA | WOA | GOA | PSO | LSHADE
F1 | N/A | 1.21E-12 | N/A | N/A | 5.20E-06 | 1.21E-12 | 1.21E-12 | 1.21E-12
F2 | N/A | 1.21E-12 | N/A | N/A | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12
F3 | N/A | 1.21E-12 | N/A | N/A | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12
F4 | N/A | 1.21E-12 | N/A | N/A | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12
F5 | 2.65E-06 | 7.96E-03 | 7.96E-03 | N/A | 7.96E-03 | 1.67E-05 | 2.88E-06 | 1.61E-06
F6 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | N/A
F7 | N/A | 1.60E-07 | 1.36E-07 | 3.47E-10 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11
F8 | N/A | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11
F9 | N/A | 1.10E-02 | N/A | N/A | N/A | 1.21E-12 | 1.21E-12 | 1.21E-12
F10 | N/A | 1.21E-12 | N/A | N/A | 9.84E-10 | 1.21E-12 | 1.21E-12 | 1.21E-12
F11 | N/A | 1.21E-12 | 1.21E-12 | N/A | 0.3337 | 1.21E-12 | 1.21E-12 | 1.18E-12
F12 | 3.02E-11 | 3.02E-11 | N/A | 3.02E-11 | 0.3183 | 3.02E-11 | 8.15E-05 | 0.3790
F13 | 2.67E-09 | 5.57E-10 | 6.12E-10 | 6.49E-07 | 0.2707 | 5.19E-02 | 1.04E-04 | N/A
F14 | 1.20E-12 | 1.20E-12 | 1.20E-12 | 1.21E-12 | 1.21E-12 | 6.09E-13 | 6.58E-04 | N/A
F15 | 1.99E-10 | 3.00E-10 | 3.63E-10 | 3.30E-10 | 3.99E-10 | 7.71E-11 | 2.91E-02 | N/A
F16 | 6.28E-04 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.19E-12 | 1.20E-12 | N/A | N/A
F17 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 6.15E-11 | N/A | N/A
F18 | 5.55E-05 | 5.20E-12 | 5.20E-12 | 5.20E-12 | 5.20E-12 | 5.19E-12 | N/A | 2.35E-03
F19 | N/A | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 6.12E-14 | 1.69E-14
F20 | 2.20E-03 | 7.61E-09 | 1.88E-09 | 4.10E-12 | 1.79E-10 | 1.41E-09 | 6.32E-05 | N/A
F21 | N/A | 6.97E-03 | 4.51E-02 | 3.01E-11 | 3.99E-04 | 2.71E-02 | 0.6616 | 1.23E-09
F22 | N/A | 7.80E-12 | 7.80E-12 | 7.80E-12 | 7.80E-12 | 7.80E-12 | 2.79E-02 | 9.39E-06
F23 | N/A | 1.46E-11 | 1.46E-11 | 1.46E-11 | 1.46E-11 | 1.46E-11 | 5.74E-02 | 6.48E-06
Note: N/A represents the best algorithm in terms of optimization performance among all the algorithms for the corresponding function.
Table 6. Comparative results for the three-bar truss design problem.
Algorithm | A1 | A2 | Optimal Cost
MCLAOA | 0.7548 | 0.2920 | 187.4918
HHO [44] | 0.7887 | 0.4083 | 263.8958
IAOA [51] | 0.7897 | 0.4045 | 263.8537
CAOA [18] | 0.7841 | 0.4237 | 263.9362
AOASC [20] | 0.7884 | 0.4081 | 263.8523
OSAOA [52] | 1.00 | 0.00 | 282.84
AOA [16] | 0.79369 | 0.39426 | 263.9154
RSA [39] | 0.7887 | 0.40805 | 263.8928
GOA [41] | 0.7889 | 0.4076 | 263.8959
Table 7. Comparative results for the pressure vessel design problem.
Algorithm | Ts | Th | R | L | Optimal Cost
MCLAOA | 1.1134 | 0 | 67.2893 | 10 | 3675.9636
IAOA [51] | 0.7637 | 0.3705 | 41.5666 | 184.1352 | 5813.5505
CAOA [18] | 0.8416 | 0.4139 | 45.2890 | 155.7818 | 5822.6083
AOASC [20] | 0.8254 | 0.4262 | 42.7605 | 169.3396 | 6048.6812
OSAOA [52] | 0.8125 | 0.4375 | 42.0984 | 176.6512 | 6060.0479
MPA [53] | 0.7782 | 0.3846 | 40.3196 | 200.00 | 5885.3353
HHO [44] | 0.8176 | 0.4073 | 42.0917 | 176.7587 | 6000.4626
RSA [39] | 0.8401 | 0.4190 | 43.3812 | 161.5556 | 6034.7591
WOA [40] | 0.8125 | 0.4375 | 42.0983 | 176.6390 | 6059.7410
AOA [16] | 0.8304 | 0.4162 | 42.7513 | 169.3454 | 6048.7844
Table 8. Network training MAE for different numbers of hidden layer nodes.
Nodes | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
MAE | 1.00E-03 | 1.01E-03 | 6.14E-04 | 2.46E-03 | 2.21E-03 | 1.88E-03 | 1.01E-03 | 1.51E-03
Note: The best result is marked in bold type.
Table 9. Predictive results evaluation.
Models | MAE | RMSE | MAPE
BPNN | 4.31E-03 | 5.08E-03 | 2.3614
PSO-BPNN | 2.72E-03 | 2.82E-03 | 1.3876
SCA-BPNN | 1.49E-03 | 1.68E-03 | 0.7221
CAOA-BPNN | 1.85E-03 | 1.94E-03 | 0.9187
AOA-BPNN | 2.14E-03 | 2.44E-03 | 1.1838
MCLAOA-BPNN | 6.14E-04 | 6.57E-04 | 0.3076
Note: The best results are marked in bold type.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
