Article

A Multi-Strategy Marine Predator Algorithm and Its Application in Joint Regularization Semi-Supervised ELM

1 School of Electronics and Information Engineering, Hebei University of Technology, Tianjin 300401, China
2 School of Mechanical Engineering, Hebei University of Technology, Tianjin 300401, China
* Authors to whom correspondence should be addressed.
Mathematics 2021, 9(3), 291; https://doi.org/10.3390/math9030291
Submission received: 31 December 2020 / Revised: 27 January 2021 / Accepted: 29 January 2021 / Published: 1 February 2021

Abstract:
A novel semi-supervised learning method is proposed to better utilize labeled and unlabeled samples and improve classification performance. The Laplacian regularization used in the semi-supervised extreme learning machine (SSELM) tends to lead to poor generalization ability and ignores the role of the labeled information. To solve these problems, a Joint Regularized Semi-Supervised Extreme Learning Machine (JRSSELM) is proposed, which uses Hessian regularization instead of Laplacian regularization and adds a supervised information regularization term. To address the slow convergence of the marine predator algorithm (MPA) and its tendency to fall into local optima, a multi-strategy marine predator algorithm (MSMPA) is proposed, which first uses a chaotic opposition learning strategy to generate a high-quality initial population, then uses adaptive inertia weights and an adaptive step control factor to improve exploration, exploitation, and convergence speed, and finally uses a neighborhood dimensional learning strategy to maintain population diversity. The parameters in JRSSELM are then optimized using MSMPA, and the resulting MSMPA-JRSSELM is applied to logging oil layer identification. The experimental results show that MSMPA exhibits clear superiority and strong competitiveness in convergence accuracy and convergence speed. The classification performance of MSMPA-JRSSELM is also better than that of other classification methods, and the practical application results are remarkable.

1. Introduction

Swarm intelligence algorithms [1] mainly simulate the behavior of a group of animals in search of food in a cooperative manner, where each member of the group learns from his or her own experience and the experience of the whole group, and changes the direction of the prey search accordingly [2]. Swarm intelligence algorithms can effectively solve many complex and challenging optimization problems in the field of artificial intelligence [3], and are mainly applied to combinatorial optimization [4], feature selection [5], image processing [6], data mining [7], and other fields.
In recent years, new meta-heuristic algorithms have been proposed to mimic the natural behavior of birds, bees, fish, and other groups of organisms, including Particle Swarm Optimization (PSO) [8], Gray Wolf Optimization (GWO) [9], Moth Flame Optimization (MFO) [10], Seagull Optimization Algorithm (SOA) [11], Sine Cosine Algorithm (SCA) [12], Whale Optimization Algorithm (WOA) [13], Coyote Optimization Algorithm (COA) [14], Carnivorous Plant Algorithm (CPA) [15], Transient Search Algorithm (TSA) [16], and more. In 2020, Faramarzi et al. [17] proposed the Marine Predator Algorithm (MPA), a novel meta-heuristic that mimics marine hunting behavior, arguing that balancing the Lévy flight strategy and the Brownian motion strategy of marine organisms can effectively solve optimization problems. Each hunting process is divided into three stages: in the first stage, the prey moves at high speed while the predator keeps a low speed; in the second stage, both the prey and the predator move at high speed; in the third stage, the predator moves much faster than the prey. A mechanism based on vortex formation and the fish aggregating device effect prevents premature convergence.
Since MPA exhibits significant superiority in solving optimization problems, it is now used to solve both continuous and engineering optimization problems with good performance. Elaziz et al. [18] proposed a hybrid MPA and MFO algorithm (MPAMFO) to address MPA's weak ability to avoid local optima, replacing the local search method of MPA with MFO; this effectively improves the exploitation capability of MPA, and the method performs excellently on multi-level threshold segmentation (MLT) of images. Abdel et al. [19] proposed an improved marine predator algorithm (IMPA) that introduces a ranking-based diversity reduction (RDR) strategy to directly replace the positions of continuously poorly performing search agents with the best individual position, improving the performance of MPA and speeding up convergence. Naga et al. [20] proposed a hybrid of MPA and a success-history-based adaptive differential evolution algorithm, which improves the global and local search capabilities for finding the optimal solution and is used to solve for the optimal parameter combination in the single-diode model. To address the limitations of MPA's exploration and exploitation capabilities, Ridha [21] combined MPA with Lambert W-functions and applied this method to optimizing parameter combinations in single-diode and dual-diode PV models. Yousri et al. [22] proposed an MPA-based robust photovoltaic array reconfiguration strategy for solar PV arrays that must ensure maximum power under weak lighting conditions and avoid damage under intense heat conditions. Soliman et al. [23] proposed a phototransistor (TDPV) model for the problem of photovoltaic losses in photovoltaic power plants; since it has nine parameters and cannot be solved directly, they used MPA to find the optimal parameter distribution, obtaining a new high-accuracy TDPV model. For optimal reactive power dispatch (ORPD) under highly stochastic wind speed and solar irradiance, Ebeed et al. [24] proposed solving for the optimal parameter combination of the ORPD by MPA to minimize the system's resource waste. For the deployment and training of convolutional neural networks (CNN), fast feature extraction is challenging; Sahlol et al. [25] proposed a feature extraction method combining a CNN with a swarm intelligence algorithm, in which a fractional-order algorithm is combined with the marine predator algorithm (FO-MPA) to improve the optimization performance and convergence speed of MPA. The proposed feature extraction method not only performs well but also reduces the computational complexity.
The most widely used classification methods are Random Forests (RF) [26], Support Vector Machines (SVM) [27], and Extreme Learning Machines (ELM) [28]. To address the shortcomings of traditional missing-data filling methods, Deng et al. [29] proposed an improved Random Forest filling algorithm, which combines linear interpolation, matrix combination, and matrix transformation to fill large amounts of missing electricity data. To use the minimum number of trees for classification, Paul et al. [30] proposed an improved random forest classifier based on the numbers of important and unimportant features, which iteratively removes some unimportant features. To improve protein structure prediction performance, Kalaiselvi et al. [31] introduced an improved random forest classification technique based on weighted Pearson correlation (WPC-IRFC), which has higher accuracy and shorter runtime. In response to the fact that SVMs do not adequately distinguish between numerical and nominal attributes, Peng et al. [32] proposed a novel SVM algorithm for heterogeneous data learning, which embeds nominal attributes into the real space by minimizing the estimated generalization error. To address the lack and contamination of supervised information, Dong et al. [33] proposed a robust semi-supervised classifier using a novel correntropy-based loss function and the Laplacian SVM (LapSVM).
Huang et al. [34] proposed a semi-supervised extreme learning machine (SSELM) based on manifold regularization, extending ELM to semi-supervised learning. On various data sets, when auxiliary unlabeled data are available, SSELM consistently outperforms purely supervised learning algorithms such as Support Vector Machines (SVM) and ELM. To address the strong sensitivity of SSELM's classification performance to the quality of the manifold graph, She et al. [35] proposed a regularized SSELM based on balanced graphs, combining a label consistency graph (LCG) and a sample similarity graph (SSG) and optimizing the weight ratio of the two graphs to obtain an optimal neighborhood graph. To address SSELM's inability to mine information from nonlinear data, Zhou et al. [36] proposed a semi-supervised extreme learning machine based on a low-rank representation (LRR-SSELM), which introduces a nonlinear classifier and a low-rank representation (LRR) that preserves the manifold structure of the original data. To enhance the feature extraction and classification performance of SSELM, She et al. [37] proposed a new hierarchical semi-supervised extreme learning machine (HSSELM) that uses the HELM method for automatic deep feature extraction and then uses SSELM for the classification task. To address the problem that manifold graphs are only pre-constructed before classification and never changed afterwards, which harms the robustness of model performance, Ma et al. [38] proposed an adaptive safe semi-supervised extreme learning machine, which adaptively computes the safety of unlabeled samples and adaptively constructs manifold graphs. To address the problem that SSELM is sensitive to outliers in labeled samples, Pei et al. [39] proposed a new robust SSELM that uses a nonconvex squared loss function to impose a constant penalty on outliers and mitigate their possible negative effects.
SSELM is based on ELM with the addition of Laplacian regularization, and mitigates the drawback that sample imbalance tends to cause poor generalization performance by assigning weights to different classes [34]. SSELM greatly expands the practicality of the ELM algorithm while retaining all the advantages of ELM, such as high training efficiency and simple implementation of multi-class classification. Laplacian regularization [40] uses the L2 norm of the function gradient, so when solving classification or regression problems its minimization drives the optimal function toward a constant function, which destroys the local topology and the inference power between samples. Hessian regularization [41] uses the L2 norm of the Hessian of the function, whose minimization drives the optimal function toward a linear function; it can adapt along the geodesic distance, preserves the local topology between samples more effectively, and characterizes the intrinsic local geometric properties. Hessian regularization therefore shows superior performance over Laplacian regularization in predicting data points outside the region boundaries.
Therefore, in this paper, a novel semi-supervised learning method is proposed to extract the hidden information in a large amount of unlabeled data and obtain an efficient classifier trained with only a small number of labeled samples. Since SSELM uses Laplacian regularization, which has poor inference ability and does not further exploit label information, its classification performance is limited and its generalization is poor; the Joint Regularized Semi-Supervised Extreme Learning Machine (JRSSELM) is therefore proposed. It uses Hessian regularization, which has better inference power and better preserves the local manifold structure, and introduces a supervised information regularization term that further exploits the information of the labeled samples. Since JRSSELM keeps the same parameter-selection scheme as SSELM, with input weights and hidden-layer thresholds chosen randomly and never adjusted, this, together with grid selection of the hyperparameters, can lead to limited performance and poor generalization. A Multi-Strategy Marine Predator Algorithm (MSMPA) is therefore proposed to address the limitations of MPA's exploitation and exploration capabilities, as well as its slow convergence. First, because traditional random initialization tends to produce low-quality initial populations, the population is initialized with a chaotic tent mapping and an opposition learning strategy to ensure a high-quality initial population. Secondly, to balance the extensive global search of the early stage with the fine local search of the later stage, adaptive inertia weights and an adaptive step control factor are adopted, which effectively enhance the exploration and exploitation capability of the algorithm. Furthermore, since individuals tend to lose diversity as iterations accumulate, making it difficult to escape local optima, the proposed neighborhood dimensional learning strategy preserves population diversity in each iteration.
To verify the superiority of MSMPA, it is compared with seven well-known algorithms (MPA, PSO, GWO, WOA, MFO, SOA, and SCA) on 18 classical benchmark test functions and the CEC2017 competition test functions. The experimental results indicate that MSMPA is significantly competitive, enhancing global and local search capabilities and accelerating convergence. MSMPA is then used to optimize the parameters of the JRSSELM model, and JRSSELM and MSMPA-JRSSELM are applied to logging oil layer identification. The results show that JRSSELM and MSMPA-JRSSELM significantly outperform other classification methods; in particular, MSMPA-JRSSELM has higher classification accuracy and stability than the other models while avoiding overfitting.
The rest of the paper is structured as follows. Section 2 introduces the marine predator algorithm and its improvements. Section 3 presents and analyzes the experimental results of MSMPA. Section 4 describes the semi-supervised extreme learning machine and its improvements. Section 5 presents and analyzes the experimental results for oil logging applications. Section 6 summarizes the work of this paper and provides an outlook for the future. These are followed by the Abbreviations section and Appendix A.

2. Marine Predator Algorithm and Its Improvement

2.1. Basic Marine Predator Algorithm

2.1.1. Population Location Initialization

MPA is a novel meta-heuristic algorithm that mimics marine predation by adopting a randomized localization rule for the initial population, with the following mathematical expression:
$$X_{ij} = lb + r \cdot (ub - lb), \quad i = 1, \dots, n,\; j = 1, \dots, d$$
where X i j represents the coordinates of the j-th dimension of the i-th population, n is the number of populations, d is the dimension, ub and lb are the upper and lower boundaries of the search space, respectively, and r is a random number between [0, 1].
Based on the location of the search agent, a matrix of prey can be constructed, as follows:
$$\mathrm{Prey} = \begin{bmatrix} X_{1,1} & X_{1,2} & \cdots & X_{1,d} \\ X_{2,1} & X_{2,2} & \cdots & X_{2,d} \\ X_{3,1} & X_{3,2} & \cdots & X_{3,d} \\ \vdots & \vdots & \ddots & \vdots \\ X_{n,1} & X_{n,2} & \cdots & X_{n,d} \end{bmatrix}_{n \times d}$$
Inspired by the survival-of-the-fittest law of the jungle, the best hunter is believed to be the most gifted at predation, so the optimal individual is used as a template to replicate n − 1 identical predators, and these n hunters form an elite matrix, as shown in the following equation:
$$\mathrm{Elite} = \begin{bmatrix} X^I_{1,1} & X^I_{1,2} & \cdots & X^I_{1,d} \\ X^I_{2,1} & X^I_{2,2} & \cdots & X^I_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ X^I_{n,1} & X^I_{n,2} & \cdots & X^I_{n,d} \end{bmatrix}_{n \times d}$$
where $X^I$ denotes the optimal individual vector. The search agents include both predators and prey, since both are searching for their food; after each iteration, the elite positions are updated based on fitness.
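As a concrete illustration, the initialization and elite construction can be sketched as follows (a minimal NumPy sketch, not the authors' code; the function names are our own, and `fitness` stands for any objective to be minimized):

```python
# Minimal sketch of MPA initialization (Eqs. (1)-(3)): a random Prey matrix
# within the bounds, and an Elite matrix built by replicating the fittest agent.
import numpy as np

def init_mpa(n, d, lb, ub, fitness, rng=np.random.default_rng()):
    prey = lb + rng.random((n, d)) * (ub - lb)            # Eq. (1); rows form the Prey matrix, Eq. (2)
    best = prey[np.argmin([fitness(x) for x in prey])]    # top predator (fittest individual)
    elite = np.tile(best, (n, 1))                         # Eq. (3): n copies of the best individual
    return prey, elite
```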

2.1.2. Exploratory Phase of High-Speed Ratio

Standard Brownian motion [31] is a stochastic process whose steps are drawn from the normal (Gaussian) distribution with zero mean (μ = 0) and unit variance (σ² = 1). The probability density function of the motion at point x is as follows:
$$f_B(x; \mu, \sigma) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(x - \mu)^2}{2\sigma^2}\right) = \frac{1}{\sqrt{2\pi}} \exp\!\left(-\frac{x^2}{2}\right)$$
At the beginning of the iterations, i.e., while $Iter < \frac{1}{3} Max\_Iter$, where Iter is the current iteration and Max_Iter is the maximum number of iterations, the prey frantically searches for food while the predator adopts a no-movement strategy. This is the high-speed ratio case, in which the prey's movement trajectory proceeds as follows:
$$\mathrm{stepsize}_i = R_B \otimes \left(\mathrm{Elite}_i - R_B \otimes \mathrm{Prey}_i\right), \quad i = 1, \dots, n$$
$$\mathrm{Prey}_i = \mathrm{Prey}_i + P \cdot R \otimes \mathrm{stepsize}_i$$
where $R_B$ is a vector of random numbers drawn from the normal distribution representing Brownian motion, $\otimes$ denotes element-by-element multiplication, P is the step control factor, a constant 0.5, and R is a vector of random numbers in [0, 1]. Multiplying by $R_B$ simulates the high-speed movement of the prey.
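A hedged sketch of this exploration update (Eq. (5)), with names of our own choosing:

```python
# Sketch of the high-speed-ratio prey update: R_B is drawn from the standard
# normal distribution to simulate Brownian motion; P = 0.5 as in the original MPA.
import numpy as np

def explore_update(prey, elite, P=0.5, rng=np.random.default_rng()):
    RB = rng.normal(size=prey.shape)        # Brownian random vectors
    step = RB * (elite - RB * prey)         # stepsize_i
    R = rng.random(prey.shape)              # uniform random vectors in [0, 1]
    return prey + P * R * step              # Prey_i update, Eq. (5)
```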

2.1.3. Mid-Speed Ratio Transition Phase

In the middle stage, $\frac{1}{3} Max\_Iter < Iter < \frac{2}{3} Max\_Iter$, the predator updates its position at the same rate as the prey. Since exploration has been going on for some time, half of the population now transitions to the exploitation stage. Exploration and exploitation are equally important at this point: the prey is primarily responsible for exploitation, while the hunter is responsible for exploration. In this phase, the prey updates its position according to Lévy movement, and the predator adopts Brownian movement.
Lévy flight is a special random-walk strategy in which the step lengths obey a heavy-tailed probability distribution called the Lévy distribution [42]. It can usually be approximated by a simple power-law distribution $L(s) \sim |s|^{-1-\beta}$, where $0 < \beta \le 2$ and $L(s)$ is the probability of a moving step s. Following the algorithm of Mantegna et al. [43], the Lévy flight step $R_L$ can be defined as:
$$R_L = \frac{u}{|v|^{1/\beta}}$$
where u and v are normally distributed random numbers, i.e., $u \sim N(0, \sigma_u^2)$ and $v \sim N(0, \sigma_v^2)$.
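For illustration, the Mantegna step can be sketched as follows (assuming the commonly used exponent β = 1.5, which the text does not fix, and the convention σ_v = 1):

```python
# Sketch of the Mantegna estimator for the Lévy step R_L (Eq. (6)).
import math
import numpy as np

def levy_step(size, beta=1.5, rng=np.random.default_rng()):
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)     # u ~ N(0, sigma_u^2)
    v = rng.normal(0.0, 1.0, size)         # v ~ N(0, 1), an assumed convention
    return u / np.abs(v) ** (1 / beta)     # R_L = u / |v|^(1/beta)
```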
For the first half of the population, the position is updated by the following formula:
$$\mathrm{stepsize}_i = R_L \otimes \left(\mathrm{Elite}_i - R_L \otimes \mathrm{Prey}_i\right), \quad i = 1, \dots, n/2$$
$$\mathrm{Prey}_i = \mathrm{Prey}_i + P \cdot R \otimes \mathrm{stepsize}_i$$
where $R_L$ is a vector of random numbers generated by the Lévy flight step. It can be seen that the Lévy stride is very useful for exploitation.
For the latter half of the population, the exploration behavior is still performed, and the simulated motion is updated by the following formula:
$$\mathrm{stepsize}_i = R_B \otimes \left(R_B \otimes \mathrm{Elite}_i - \mathrm{Prey}_i\right), \quad i = n/2, \dots, n$$
$$\mathrm{Prey}_i = \mathrm{Elite}_i + P \cdot CF \otimes \mathrm{stepsize}_i, \quad \text{where } CF = \left(1 - \frac{Iter}{Max\_Iter}\right)^{2 \frac{Iter}{Max\_Iter}}$$
where CF is an adaptive parameter used to control the predator’s moving stride.

2.1.4. Low-Speed Ratio Development Phase

The later stage, i.e., $Iter > \frac{2}{3} Max\_Iter$, is also called the low-speed ratio stage because the prey is set to move much more slowly than the hunter. The predator is eager to catch prey, and the movement of the whole population is exploitation, so the Lévy stride strategy is adopted and the position is updated as follows:
$$\mathrm{stepsize}_i = R_L \otimes \left(R_L \otimes \mathrm{Elite}_i - \mathrm{Prey}_i\right), \quad i = 1, \dots, n$$
$$\mathrm{Prey}_i = \mathrm{Elite}_i + P \cdot CF \otimes \mathrm{stepsize}_i$$

2.1.5. Vortex Formation and Fish Aggregating Device (FADs) Effect

To escape local optima, Faramarzi et al. considered that external environmental disturbances may affect the movement of the population to a greater or lesser extent, and proposed the following position-updating mechanism based on vortex formation and the fish aggregating device (FADs) effect, expressed mathematically as follows:
$$\mathrm{Prey}_i = \begin{cases} \mathrm{Prey}_i + CF\left[X_{\min} + R \otimes (X_{\max} - X_{\min})\right] \otimes U & \text{if } r \le FADs \\ \mathrm{Prey}_i + \left[FADs(1 - r) + r\right]\left(\mathrm{Prey}_{r1} - \mathrm{Prey}_{r2}\right) & \text{if } r > FADs \end{cases}$$
where FADs represents the probability of influencing the search process and is set to 0.2; $X_{\max}$ and $X_{\min}$ are vectors of the upper and lower bounds of the search space; U is a binary vector consisting of 0s and 1s; r is a uniform random number in [0, 1]; and the subscripts r1 and r2 denote random indices into the prey matrix.
Because marine predators have good memories, they revisit prey-rich areas after a successful hunt. Fitness is calculated for the population after each iteration, and the elite matrix is replaced whenever a fitter position is found, a process that ensures that the quality of the elite improves with the number of iterations.
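A sketch of the FADs perturbation (Eq. (10)); constructing the binary vector U by thresholding uniform draws at the FADs probability is our own assumption:

```python
# Sketch of the FADs effect: with probability FADs an agent takes a long jump
# toward a random point in the bounds; otherwise it moves along the difference
# of two randomly chosen prey.
import numpy as np

def fads_effect(prey, x_min, x_max, cf, fads=0.2, rng=np.random.default_rng()):
    n, d = prey.shape
    out = prey.copy()
    for i in range(n):
        r = rng.random()
        if r <= fads:
            U = (rng.random(d) < fads).astype(float)      # binary vector U (assumed construction)
            out[i] += cf * (x_min + rng.random(d) * (x_max - x_min)) * U
        else:
            r1, r2 = rng.integers(0, n, size=2)           # random prey indices
            out[i] += (fads * (1 - r) + r) * (prey[r1] - prey[r2])
    return out
```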

2.2. Multi-Strategy Marine Predator Algorithm—MSMPA

2.2.1. Chaotic Opposition Learning Strategy

Chaotic mappings have been widely used in the optimization of intelligent algorithms due to their regularity, randomness, and ergodicity [44], but different chaotic mappings greatly influence the chaotic optimization process [45]. The tent mapping has good ergodic uniformity, iterates quickly, and produces a uniformly distributed chaotic sequence in [0, 1]. The tent mapping expression is as follows:
$$\lambda_{t+1} = \begin{cases} \lambda_t / \alpha, & \lambda_t \in [0, \alpha) \\ (1 - \lambda_t)/(1 - \alpha), & \lambda_t \in [\alpha, 1] \end{cases} \qquad t = 0, 1, 2, \dots, T$$
where λ t is the number of chaos at the t-th iteration, T is the maximum number of iterations, and α is a customizable parameter between (0, 1). Note that when α = 0.5 , the system shows a short-period state, which is not suitable for population mapping, so α = 0.7 is chosen in this experiment.
The resulting chaotic variable λ is used for the initial population generation according to the search boundary [ l b , u b ] , as follows:
$$X_{ij} = lb + \lambda_j \times (ub - lb)$$
where X i j is the j-th dimensional coordinate of the i-th search agent, and λ j is the coordinate of the j-th dimension of λ .
Although the chaotic sequence generates a diverse, well-distributed population, better search agents may lie at the opposing positions of the search space, so an opposing population of the same size is generated as follows:
$$X_{op\_i} = X_{\max} + X_{\min} - X_i$$
where X o p _ i is the opposing position of the i-th agent and X i is the position of the i-th individual.
Then, the two populations are merged into a 2n population, the fitness of each individual is calculated, and the individuals are ranked from smallest to largest fitness; the top n fittest individuals form the initial population, and the fittest individual seeds the Elite matrix.
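The whole chaotic opposition initialization can be sketched as follows (our own function names; `fitness` is any objective to minimize, and scalar bounds are assumed):

```python
# Sketch of Eqs. (11)-(13): a tent-map sequence seeds the population, the
# opposing population is formed, and the best n of the 2n merged agents survive.
import numpy as np

def tent_sequence(length, alpha=0.7, seed=0.37):
    lam, seq = seed, []
    for _ in range(length):
        lam = lam / alpha if lam < alpha else (1 - lam) / (1 - alpha)
        seq.append(lam)
    return np.array(seq)

def chaotic_opposition_init(n, d, lb, ub, fitness):
    lam = tent_sequence(n * d).reshape(n, d)
    pop = lb + lam * (ub - lb)                         # Eq. (12)
    opp = (ub + lb) - pop                              # Eq. (13): X_max + X_min - X_i
    merged = np.vstack([pop, opp])
    order = np.argsort([fitness(x) for x in merged])   # rank by fitness, ascending
    return merged[order[:n]]                           # keep the n fittest agents
```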

2.2.2. Adaptive Inertial Weights and Step Control Factors

The inertia weight is inspired by the particle swarm algorithm and plays a decisive role in the algorithm's search ability and convergence speed. In the original MPA, the inertia weight is a constant value, which constrains the algorithm's global and local search capabilities [46].
In early iterations, large inertia weights enhance the global search capability; in later iterations, the algorithm needs to perform a local fine search, so small inertia weights enhance the local search capability and speed up convergence [47].
In the MPA search process, the choice of inertia weighting strategy is therefore crucial, and a strategy is proposed that adaptively changes the inertia weight with the number of iterations: in early iterations the inertia weight is large and decreases rapidly, focusing on global search; in later iterations it is small and decreases slowly, enabling a more detailed local fine search [48]. The adaptive inertia weight is expressed as follows:
$$w = a \cos^{b}\!\left(\ln\!\left(1 + e^{Iter / Max\_Iter}\right)\right) + c$$
where a , b, and c are optional parameters. After the experimental analysis, a , b, and c were set to 20, 12, and 0.2, respectively.
The adaptive inertia weight curve is shown in Figure 1. After introducing the adaptive inertia weight, the prey positions are updated by the following equations:
$$\mathrm{Prey}_i = w \cdot \mathrm{Prey}_i + P \cdot R \otimes \mathrm{stepsize}_i$$
$$\mathrm{Prey}_i = w \cdot \mathrm{Elite}_i + P \cdot CF \otimes \mathrm{stepsize}_i$$
The step length control is also particularly important for MPA: early on, the step-length influence should be as large as possible so that the prey traverses the search space well; later, a small step-length influence is needed both to avoid falling into local optima and to support the local fine search. In the original MPA, the step control factor is a constant 0.5, which limits the performance of the algorithm. Therefore, in this paper, an adaptive step control factor is proposed, which both ensures the broad traversal needed for the early global search and enhances the later local fine search. Its mathematical expression is as follows:
$$P = m \sin\!\left(\frac{\pi}{5} \times p\, e^{-n \times Iter / Max\_Iter}\right) + q$$
where m, n, p, and q are optional parameters. After experimental analysis, m, n, p, and q were set to be 1.2, 10, 2, and 0.2, respectively.
The curve of the adaptive step control factor is shown in Figure 2.
From Figure 2, it can be seen that the early values are large and decrease rapidly, which strengthens the step-size influence and thus the traversal ability of the global search, while the later values are small and decrease slowly to accommodate the local fine search.
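Under the forms of Eqs. (14) and (17) as reconstructed above (the exact expressions inside the exponentials are our reading of the source), the two schedules can be sketched as:

```python
# Sketch of the adaptive inertia weight (Eq. (14)) and adaptive step control
# factor (Eq. (17)); the parameter defaults follow the paper's reported settings.
import numpy as np

def inertia_weight(it, max_it, a=20.0, b=12.0, c=0.2):
    return a * np.cos(np.log(1 + np.exp(it / max_it))) ** b + c

def step_factor(it, max_it, m=1.2, n=10.0, p=2.0, q=0.2):
    return m * np.sin(np.pi / 5 * p * np.exp(-n * it / max_it)) + q

its = np.arange(1001)
w, P = inertia_weight(its, 1000), step_factor(its, 1000)  # curves as in Figures 1-2
```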

2.2.3. Neighborhood Dimensional Learning Strategy (NDL)

Population diversity plays a key role in the convergence speed and accuracy of the algorithm. The population diversity of the original MPA decreases gradually as the algorithm iterates, which tends to lead to a local rather than the global optimum when solving high-dimensional complex problems [49]. Therefore, to keep the population diverse in every iteration, a Neighborhood Dimensional Learning (NDL) strategy is proposed.
First, at the end of each iteration of the original MPA, a candidate population of the same size is generated from the original population, relocating each new individual using either the information of the optimal individual or its own information, as follows:
$$X_{CAND\_i}(t+1) = \begin{cases} w X^*(t) + 2(r - 0.5)\left[(ub - lb)\,r + lb\right] & p \ge 0.5 \\ w X_i(t) + 2(r - 0.5)\left[(ub - lb)\,r + lb\right] & p < 0.5 \end{cases}$$
where $X_i(t)$ is the position vector of the i-th search agent, $X_{CAND\_i}(t+1)$ is the position vector of the i-th candidate individual, $X^*(t)$ is the position vector of the elite individual, and p is a random probability.
Second, based on the Euclidean distance between the current position $X_i(t)$ and the candidate position $X_{CAND\_i}(t+1)$, a neighborhood radius is computed by the following equation:
$$R_i(t) = \left\| X_i(t) - X_{CAND\_i}(t+1) \right\|$$
Then, according to R i ( t ) , the Euclidean distance which is less than the radius search agent is successively selected from the population, and these individuals are saved as neighbors of the i-th individual. The mathematical expression is as follows:
$$N_i(t) = \left\{ X_j(t) \;\middle|\; D_i\!\left(X_i(t), X_j(t)\right) \le R_i(t),\; X_j(t) \in \mathrm{population} \right\}$$
where $N_i(t)$ denotes the set of neighbors of the i-th individual and $D_i$ denotes the Euclidean distance operation.
The next step in neighborhood dimension learning is to update the dimensional coordinates of the current individual using some dimensional information from the population of neighbors and the dimensional information of an individual randomly selected from the entire population, as expressed mathematically as follows:
$$X_{NDL\_i,d}(t+1) = X_{i,d}(t) + \mathrm{sign}(r - 0.5) \times \left(X_{n,d}(t) - X_{r,d}(t)\right)$$
where X i , d ( t ) represents the d-dimensional information of the i-th search agent, X N D L _ i , d ( t + 1 ) represents the new d-dimensional information after passing the NDL, X n , d ( t ) represents the d-dimensional information of the neighboring individuals, and X r , d ( t ) represents the d-dimensional information of the randomly selected individuals.
Finally, the fitness values of the candidate and NDL individuals are compared to select the fitter one, as shown by the following mathematical expression:
$$X_i(t+1) = \begin{cases} X_{CAND\_i}(t+1) & \text{if } f\!\left(X_{CAND\_i}(t+1)\right) < f\!\left(X_{NDL\_i}(t+1)\right) \\ X_{NDL\_i}(t+1) & \text{otherwise} \end{cases}$$
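A compact sketch of the NDL update for a single agent i (Eqs. (19)-(21)), with our own function names; `pop` is the current population and `cand` is the candidate position from Eq. (18):

```python
# Sketch of neighborhood dimensional learning: find neighbors within the
# radius set by the candidate, then recombine dimension-wise information.
import numpy as np

def ndl_update(pop, i, cand, rng=np.random.default_rng()):
    radius = np.linalg.norm(pop[i] - cand)           # Eq. (19)
    dists = np.linalg.norm(pop - pop[i], axis=1)
    neighbors = np.where(dists <= radius)[0]         # Eq. (20); never empty (contains i itself)
    n_idx = rng.choice(neighbors)                    # a random neighbor (one per agent in this sketch)
    r_idx = rng.integers(len(pop))                   # a random agent from the whole population
    r = rng.random(pop.shape[1])                     # per-dimension random numbers
    return pop[i] + np.sign(r - 0.5) * (pop[n_idx] - pop[r_idx])   # Eq. (21)
```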
The pseudo-code of the proposed MSMPA is shown in Algorithm 1.
Algorithm 1: Pseudo-code of the MSMPA
Input: Number of Search Agents: N, Dim, Max_Iter
Output: The optimum fitness value
Generate chaotic tent mapping sequences by the Equation (11)
Initialized populations by the Equation (12)
Generation of Opposing Populations by the Equation (13)
Selection of the first N well-adapted individuals as the first generation of the population
While Iter < Max_Iter
Calculating Adaptive Inertia Weights by the Equation (14)
Calculating Adaptive Step Control Factors by the Equation (17)
Calculate fitness values and construct an elite matrix
If Iter < Max_Iter/3
  Update prey by the Equation (15)
Else if Max_Iter/3 < Iter < 2∗Max_Iter/3
  For the first half of the populations (i = 1, …, n/2)
  Update prey by the Equation (15)
  For the other half of the populations (i = n/2, …, n)
  Update prey by the Equation (16)
Else if Iter > 2∗Max_Iter/3
Update prey by the Equation (16)
End If
Generation of candidate populations by the Equation (18)
Calculating Neighborhood Radius by the Equation (19)
Finding Neighborhood Populations by the Equation (20)
Calculation of NDL populations by the Equation (21)
Calculate fitness values to update population position by the Equation (22)
Updating Memory and Applying FADs effect and update by the Equation (10)
Calculate the fitness value of the population by the fitness function
Update the current optimum fitness value and the position of the best Search Agent
End while
Return the optimum fitness value

3. Simulation Experiments and Comparative Analysis

3.1. Experimental Environment and Algorithm Parameters

In this section, in order to verify the superiority of the proposed MSMPA in solving large-scale optimization problems, eighteen classical benchmark functions [50] and 30 optimization functions from the CEC2017 competition [51] were selected, and MSMPA was tested against seven popular algorithms (MPA [17], PSO [8], GWO [9], WOA [13], MFO [10], SOA [11], and SCA [12]). To ensure a fair comparison between evolutionary and swarm intelligence algorithms, the number of iterations is replaced by the number of function evaluations in this paper. The population size is 30, and the maximum number of function evaluations is 30,000, with 30 independent runs on each function. The experimental environment is Windows 10 64-bit, MATLAB R2016a, an Intel(R) Core i5-10210U CPU at 2.1 GHz, and 16 GB of RAM. To ensure the fairness of the comparison experiments, the parameter settings from the original literature are retained for each competing algorithm; the specific parameter settings are shown in Table 1.

3.2. Benchmark Test Functions

The details of the 18 benchmark functions are shown in Table 2. Table 2 includes seven high-dimensional single-peak functions (F1–F7), six high-dimensional multi-peak functions (F8–F13), and five fixed-dimensional multi-peak functions (F14–F18). Since the high-dimensional single-peak function has only one peak, it is used to test the local convergence and convergence speed of the algorithm; the high-dimensional multipeak function and the fixed-dimensional multipeak function have multiple peaks and only one global optimum, so they are used to test the algorithm’s global optimum search and the ability to jump out of the local optimum.
To further test the performance of the algorithm, the CEC2017 Numerical Optimization Competition Suite functions were also selected, including 3 single-peak functions, 7 multi-peak functions, 10 mixed functions, and 10 composite functions. CEC2017 Numerical Optimization functions are shown in Table 3. The dimensionality of all functions is set to 10.

3.3. Experimental Results and Analysis

The convergence curve visually conveys the exploitation ability and convergence speed of an algorithm; the convergence curves of MSMPA and the seven comparison algorithms on the 18 benchmark functions are shown in Figure A1 in Appendix A. From Figure A1, the convergence speed of MSMPA on F1 to F4 is clearly faster than the other algorithms, and the end of its convergence curve is also significantly lower. On F5 to F8, F12, and F13, MSMPA does not converge fastest, but the convergence accuracy of the other algorithms is significantly worse. On F9 to F11, both the convergence speed and the convergence accuracy of MSMPA show significant superiority. On the fixed-dimensional multimodal functions, MSMPA converges faster than the other algorithms, except on F15, where its convergence accuracy is poorer than that of MPA; overall, MSMPA exhibits significant competitiveness.
In this experiment, the average (Ave), standard deviation (Std), maximum (Max), and minimum (Min) are evaluated. Friedman's test [52], a popular non-parametric statistical test, is also used to make differences in the performance of the individual algorithms easier to detect; it compares the average performance of each method across all experimental results, and the results are placed at the end of the statistical table of experimental results.
The test results of the 18 benchmark functions are shown in Table A1 in Appendix A, where the bold data are the optimal values among all methods for the same function and performance index. From Table A1, it can be seen that MSMPA performs optimally on the high-dimensional single-peak functions, especially on F1–F4, where MSMPA achieves the theoretical best value. This demonstrates the superiority of MSMPA in local search capability. On the high-dimensional multimodal functions, MSMPA shows clear superiority: except for the standard deviation on F8, where SCA is the best performer, MSMPA is the best performer on all other performance indexes, especially on F9 and F11, where MSMPA achieves the theoretical global optimum. On the fixed-dimensional multimodal functions, except for F15, where it is second to MPA, MSMPA is the best performer on all other functions, especially on F16–F18, where MSMPA reaches the theoretical extremes with a standard deviation of zero. MSMPA ranks first in the Friedman-test average rank over all functions.
The experimental results of CEC2017 are shown in Table A2 in Appendix A, in which the bold values are the optimal values for the current function under the same evaluation index. From Table A2, it can be concluded that, apart from the non-optimal performance on F16, F21, F22, F24, and F25, MSMPA achieves the best performance on the other functions, so its overall performance is optimal; it is noteworthy that the theoretical extreme was achieved on CF2. The average Friedman-test rank of MSMPA is 1.6167, still the best performance, followed by MPA.

3.4. Algorithmic Stability Analysis

To show the stability of MSMPA and the other algorithms more visually, this experiment uses box plots to show the distribution of the results of each function over 30 independent runs. A subset of the test functions is selected, and the box plots of their results are shown in Figure A2 in Appendix A.
From Figure A2, it can be seen that MSMPA exhibits remarkable stability and superior performance over the other algorithms for all functions, as shown by the fact that the lower edge, upper edge, and median are always lower than the other algorithms for the same function.

3.5. Wilcoxon Rank Sum Test Analysis

Since many random factors affect the performance of an algorithm, a statistical test is used to compare the difference in performance between MSMPA and the other algorithms. The widely used Wilcoxon rank-sum test [53] was selected with 5% as the significance level, and p-values were obtained from the pairwise rank-sum analysis of MSMPA against each of the other algorithms. If a p-value is greater than 5% or is NaN, MSMPA is not statistically significantly different on that function. The resulting p-values are shown in Table A3 in Appendix A, where bold text indicates values greater than 5% or NaN.
From Table A3, it can be concluded that on F9 the tests for MSMPA, MPA, and WOA return NaN because all three algorithms attain the theoretical optimum; both MSMPA and MPA attain the theoretical optimum on F11 and CF2, with no statistically significant differences between MSMPA and GWO or WOA on F11. MSMPA is not statistically significantly different from MPA only on F14; there are no statistically significant differences between MSMPA and MFO on F16, F17, CF3, CF9, and CF22, nor between MSMPA and PSO on CF2, CF11, CF20, CF23, and CF28; MSMPA is not statistically significantly different from SOA only on CF21. In general, MSMPA is statistically significantly different from the other algorithms on most functions, demonstrating its superior ability to find the global optimum and escape local optima.

3.6. High-Dimensional Functional Test Analysis

MPA performs poorly when dealing with complex high-dimensional problems and tends to fall into local optima. To verify the better performance of the proposed MSMPA in global search and local-optimum avoidance, the dimensions of F1–F13 are set to 100, 200, and 500, respectively. MSMPA is again run independently 30 times against the other seven algorithms with a population size of 30 and a maximum of 1000 iterations, and the results are collected and used in the Friedman test. The test results for dimensions 100, 200, and 500 are shown in Table A4, Table A5, and Table A6 in Appendix A, respectively.
It can be seen from Table A4, Table A5, and Table A6 that, apart from F2 at dimension 500, where MSMPA performs second-best behind WOA, MSMPA is significantly more competitive than the other algorithms in dealing with complex, high-dimensional problems, and it ranks first in the Friedman test of the means. It is worth noting that on F1–F4, F9, and F11, MSMPA achieves the theoretical optimal value. These results demonstrate the stability and effectiveness of MSMPA in solving complex high-dimensional problems.

4. Semi-Supervised Extreme Learning Machines and Its Improvements

4.1. Semi-Supervised Extreme Learning Machine

The mathematical expression for the objective function of SSELM is defined by introducing a Laplacian regularization term and using information from unlabeled samples as follows:
$$\min_{\beta \in \mathbb{R}^{n_h \times n_o}} \frac{1}{2}\|\beta\|^2 + \frac{1}{2}\sum_{i=1}^{l} C_i \|e_i\|^2 + \frac{\lambda}{2}\mathrm{Tr}\!\left(F^T L F\right)$$
$$\text{s.t.}\quad h(x_i)\beta = y_i^T - e_i^T,\ i = 1, \dots, l; \qquad f_i = h(x_i)\beta,\ i = 1, \dots, l+u$$
The solution to SSELM can be obtained with the following expression:
$$\beta^* = \left(I_{n_h} + H^T C H + \lambda H^T L H\right)^{-1} H^T C \tilde{Y}$$
where $I_{n_h}$ is the identity matrix of dimension $n_h$.
If the number of hidden-layer neurons is greater than the number of labeled samples, the following solution can be adopted, with the expression:
$$\beta^* = H^T \left(I_{l+u} + C H H^T + \lambda L H H^T\right)^{-1} C \tilde{Y}$$
where $I_{l+u}$ is the identity matrix of dimension $l+u$.
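Both closed forms can be sketched directly in code (a minimal NumPy sketch, not the authors' implementation; H is the hidden-layer output matrix, C the penalty matrix, L the graph Laplacian, and Y the augmented target matrix):

```python
# Sketch of the SSELM solutions: Eq. (24) for the usual case, and Eq. (25)
# for the case with more hidden neurons than labeled samples.
import numpy as np

def sselm_beta(H, C, L, Y, lam):
    nh = H.shape[1]
    A = np.eye(nh) + H.T @ C @ H + lam * H.T @ L @ H
    return np.linalg.solve(A, H.T @ C @ Y)            # Eq. (24)

def sselm_beta_wide(H, C, L, Y, lam):
    n = H.shape[0]                                    # n = l + u samples
    A = np.eye(n) + C @ H @ H.T + lam * L @ H @ H.T
    return H.T @ np.linalg.solve(A, C @ Y)            # Eq. (25)
```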

4.2. Joint Hessian and Supervised Information Regularization Semi-Supervised Extreme Learning Machine

4.2.1. Hessian Regularization

The Hessian regularization term [41] provides a simple way to establish the relationship between mappings and manifolds and is derived from the Eells energy. The Hessian energy can be estimated by a simplified form of the second-order derivatives in normal coordinates. Let $N_k(X_i)$ be the set of k nearest neighbors of sample point $X_i$; the Hessian of f at $X_i$ can then be approximated as follows:
$$\left.\frac{\partial^2 f}{\partial x_r \partial x_s}\right|_{X_i} \approx \sum_{j=1}^{k} H_{rs}^{(i)j} f(X_j)$$
The above equation can be solved by a linear least-squares fit of a second-order polynomial, which yields the coefficients $H_{rs}^{(i)j}$. The Frobenius norm of the estimated Hessian is thus obtained:
$$\left\|\nabla_a \nabla_b f\right\|^2 \approx \sum_{r,s=1}^{m}\left(\sum_{\alpha=1}^{k} H_{rs}^{(i)\alpha} f_\alpha\right)^2 = \sum_{\alpha,\beta=1}^{k} f_\alpha f_\beta B_{\alpha\beta}^{(i)}$$
where $B_{\alpha\beta}^{(i)} = \sum_{r,s=1}^{m} H_{rs}^{(i)\alpha} H_{rs}^{(i)\beta}$ completes all the estimates of the Hessian energy.
Accumulating the Hessian estimates over all points $X_i$ gives the total Hessian energy:
$$\mathrm{Hess}(f) = \sum_{i=1}^{n}\sum_{r,s}\left(\left.\frac{\partial^2 f}{\partial x_r \partial x_s}\right|_{X_i}\right)^2 = \sum_{i=1}^{n}\sum_{\alpha \in N_k(X_i)}\sum_{\beta \in N_k(X_i)} f_\alpha f_\beta B_{\alpha\beta}^{(i)}$$
where the matrix B accumulates the contributions of all data points, and n is the total number of labeled and unlabeled data.

4.2.2. Supervisory Information Regularization

If $X_i$ and $X_j$ have the same label, the corresponding coefficient $L_{s\_ij}$ is set to be nonzero and the difference between their decision values, $f(X_i) - f(X_j)$, should be small; the optimization penalty can thus be defined as $\left(f(X_i) - f(X_j)\right)^2 L_{s\_ij}$, with the elements of the coefficient matrix $L_S$ given by:
$$L_{s\_ij} = \begin{cases} \dfrac{1}{L_s}, & X_i \text{ and } X_j \text{ are in the same category} \\ 0, & \text{otherwise} \end{cases}$$
where $L_s$ is the number of pairs of samples with consistent labels; the "otherwise" case includes pairs in which at least one sample is unlabeled, with $i, j = 1, 2, \dots, l, \dots, l+u$.
Similarly, if $X_i$ and $X_j$ have different labels, the corresponding coefficient $L_{d\_ij}$ is set to be nonzero and the product of their decision values, $f(X_i) \cdot f(X_j)$, should be small; the optimization penalty can be defined as $f(X_i) f(X_j) L_{d\_ij}$, with the elements of the coefficient matrix $L_D$ given by:
$$L_{d\_ij} = \begin{cases} \dfrac{1}{L_d}, & X_i \text{ and } X_j \text{ are in different categories} \\ 0, & \text{otherwise} \end{cases}$$
where $L_d$ is the number of pairs of samples with inconsistent labels; the "otherwise" case again includes pairs in which at least one sample is unlabeled.
The mathematical expression defining the regularization term of the supervision constraint is as follows:
$$S = \sum_{i=1}^{l+u}\sum_{j=1}^{l+u}\left(f(X_i) - f(X_j)\right)^2 L_{s\_ij} + \sum_{i=1}^{l+u}\sum_{j=1}^{l+u} f(X_i) f(X_j)\, L_{d\_ij} = 2F^T\!\left(L_{S\_D} - L_S\right)F + 2F^T\!\left(0.5 \times L_D\right)F = 2F^T\!\left(L_{S\_D} - L_S + 0.5 \times L_D\right)F$$
where $L_{S\_D}$ is the diagonal matrix of the matrix $L_S$.

4.2.3. Joint Regularization SSELM

In order to make full use of the direct associations between categories contained in the label information, a supervision information regularization term is introduced to enhance SSELM. The Laplacian regularization in the original SSELM is then replaced with the joint Hessian and supervision information regularization terms; hence, JRSSELM is proposed.
Assume that the training set consists of L labeled data and U unlabeled data, represented by $\{X_l, Y_l\} = \{X_i, Y_i\}_{i=1}^{L}$ and $X_u = \{X_i\}_{i=1}^{U}$, respectively. Among the L labeled data, there are $L_s$ pairs with the same label and $L_d$ pairs with different labels. Introducing the Hessian and supervision information regularization terms in place of the original Laplacian regularization yields the objective function of JRSSELM, defined as follows:
$$\min_{\beta \in \mathbb{R}^{n_h \times n_o}} \frac{1}{2}\|\beta\|^2 + \frac{1}{2}\sum_{i=1}^{l} C_i \|e_i\|^2 + \frac{\lambda_1}{2} F^T \mathrm{Hess}\, F + \frac{\lambda_2}{2} S$$
$$\text{s.t.}\quad h(x_i)\beta = y_i^T - e_i^T,\ i = 1, \dots, l; \qquad f_i = h(x_i)\beta,\ i = 1, \dots, l+u$$
where λ 1 and λ 2 are tradeoff parameters.
Substituting the constraints into the objective function, a new optimized objective function can be obtained, and its expression is as follows:
$$\min_{\beta \in \mathbb{R}^{n_h \times n_o}} \frac{1}{2}\|\beta\|^2 + \frac{1}{2}\left\|C^{\frac{1}{2}}\!\left(\tilde{Y} - H\beta\right)\right\|^2 + \frac{\lambda_1}{2}\mathrm{Tr}\!\left(\beta^T H^T \mathrm{Hess}\, H\beta\right) + \frac{\lambda_2}{2}\mathrm{Tr}\!\left[\beta^T H^T\!\left(L_{S\_D} - L_S + 0.5 \times L_D\right)H\beta\right]$$
where $\tilde{Y} \in \mathbb{R}^{(l+u) \times n_o}$ is the augmented training target, and C is an $(l+u) \times (l+u)$ diagonal matrix whose first l diagonal elements are $[C]_{ii} = C_i$ and whose remaining elements are zero.
The derivative of the objective function with respect to β is then:
$$\frac{\partial L_{JRSSELM}}{\partial \beta} = \beta - H^T C\!\left(\tilde{Y} - H\beta\right) + \lambda_1 H^T \mathrm{Hess}\, H\beta + \lambda_2 H^T\!\left(L_{S\_D} - L_S + 0.5 \times L_D\right)H\beta$$
If the derivative value is set to zero, then the JRSSELM solution can be obtained, and its expression is as follows:
$$\beta^* = \left[I_{n_h} + H^T C H + \lambda_1 H^T \mathrm{Hess}\, H + \lambda_2 H^T\!\left(L_{S\_D} - L_S + 0.5 \times L_D\right)H\right]^{-1} H^T C \tilde{Y}$$
If the number of hidden layer neurons is greater than the number of labeled samples, the following solutions can be adopted, and the expression is as follows:
$$\beta^* = H^T\left[I_{l+u} + C H H^T + \lambda_1 \mathrm{Hess}\, H H^T + \lambda_2\left(L_{S\_D} - L_S + 0.5 \times L_D\right)H H^T\right]^{-1} C \tilde{Y}$$
where $I_{l+u}$ is the identity matrix of dimension $l+u$.

4.3. Hybrid MSMPA and Joint Regularized Semi-Supervised Extreme Learning Machine

The input weights and hidden-layer biases in the original SSELM are generated randomly and are not adjusted afterwards, and the grid selection of its hyperparameters often leads to limited performance and poor stability of the model. The hyperparameter values are all taken from $\{10^{-5}, 10^{-4}, \dots, 10^{4}, 10^{5}\}$, which can be calculated by the following mathematical expressions:
$$C_0 = 10^{6 - i}$$
$$\lambda = 10^{6 - j}$$
where i and j are integers between [0, 11].
JRSSELM keeps its parameter settings consistent with SSELM: the input weights and hidden-layer biases are also randomly generated and not updated afterwards, and the hyperparameters are likewise grid-selected over the same value range as in SSELM. The mathematical expressions are as follows:
$$\lambda_1 = 10^{6 - l}$$
$$\lambda_2 = 10^{6 - m}$$
where l and m are integers between [0, 11].
MSMPA is proposed to optimize the selection of the parameters in JRSSELM. To further improve the classification performance and robustness of JRSSELM, the input weights w, hidden-layer biases b, $C_0$, $\lambda_1$, and $\lambda_2$ are taken as the coordinates of each search agent. If the number of hidden-layer neurons is $h_n$ and the number of input-layer neurons is $h_i$, then each prey has dimension $h_n(h_i + 1) + 3$. It is worth noting that the first $h_n(h_i + 1)$ coordinates represent the input weights and hidden-layer thresholds, so their value range is [−1, 1], while the last three dimensions represent the position information of $C_0$, $\lambda_1$, and $\lambda_2$, respectively, with range set to [−5, 5]. To be consistent with the value range used by JRSSELM, the calculation can be performed as follows:
$$C_0 = 10^{X_i^{M-2}}$$
$$\lambda_1 = 10^{X_i^{M-1}}$$
$$\lambda_2 = 10^{X_i^{M}}$$
where $M = h_n(h_i + 1) + 3$ is the total dimension and $X_i^M$ denotes the M-th coordinate of the i-th search agent.
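Decoding a search agent into JRSSELM parameters can be sketched as follows (a hypothetical helper of our own, assuming the coordinate layout described above):

```python
# Sketch of Eqs. (41)-(43): the first hn*(hi+1) coordinates are input weights
# and thresholds; the last three map to C0, lambda1, lambda2 as powers of 10.
import numpy as np

def decode_agent(x, hi, hn):
    w = x[:hn * hi].reshape(hn, hi)        # input weights, values in [-1, 1]
    b = x[hn * hi:hn * (hi + 1)]           # hidden-layer thresholds
    C0 = 10.0 ** x[-3]                     # Eq. (41)
    lam1 = 10.0 ** x[-2]                   # Eq. (42)
    lam2 = 10.0 ** x[-1]                   # Eq. (43)
    return w, b, C0, lam1, lam2
```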
Secondly, the selection of the fitness function is particularly important. For a fair comparison with SSELM and JRSSELM, both of which select their optimal hyperparameter combinations by the prediction error on the test set, the fitness function in MSMPA is likewise the test-set prediction error, expressed mathematically as:
$$fitness = 1 - \frac{TP + TN}{TP + TN + FP + FN}$$
where TP stands for true positive, TN stands for true negative, FP stands for false positive, and FN stands for false negative.
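In code, the fitness is simply one minus the test-set accuracy (a trivial sketch):

```python
# Sketch of Eq. (44): fitness = 1 - accuracy, from confusion-matrix counts.
def fitness(tp, tn, fp, fn):
    return 1.0 - (tp + tn) / (tp + tn + fp + fn)
```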
The pseudo code for the MSMPA and JRSSELM mix is shown in Algorithm 2.
Algorithm 2. Pseudo-code of the MSMPA-JRSSELM
Input: L labeled samples { X l , Y l } = { X i , Y i } i = 1 L
  U unlabeled samples, X u = { X i } i = 1 U
  Number of Search Agents: N, Dim, Max_Iter
Output: The best mapping function of JRSSELM: $f: \mathbb{R}^{n_i} \to \mathbb{R}^{n_o}$
Step 1: Constructing Hessian regularization terms and supervisory information regularization terms via X l and X u .
Step 2: Generate chaotic tent mapping sequences by the Equation (11)
  Initialized populations by the Equation (12)
  Generation of Opposing Populations by the Equation (13)
  Selection of the first N well-adapted individuals as the first generation of the population
Step 3:
  If hn ≤ l (the number of labeled samples)
  Calculate the output weights β by Equation (35)
  Else
  Calculate the output weights β by Equation (36)
Step 4:
  While Iter < Max_Iter
  Update all population positions by MSMPA
  Repeat step 3
  Calculate the fitness of each search agent by the Equation (44)
  End while
Step 5: Outputs the best search agent position and optimal value
Step 6: Output the optimal mapping function of JRSSELM: $f^*(x) = h(x)\beta^*$

5. Oil Layer Identification Applications in Oil Logging

5.1. Design of Oil Layer Identification System

In oil logging, oil layer identification is a very complex dynamic research technology and a very critical element. Many factors affect the oil layer distribution, such as reservoir thickness, oil pressure, permeability, water content, reservoir pressure, effective storage thickness, and so on. What matters most is the accuracy of identification, which can assist oil layer exploration and provide effective information for engineers' decisions. The MSMPA-JRSSELM proposed in this paper is highly accurate in classification and can make accurate predictions of the oil layer distribution on a test set using a small number of labeled samples and a large number of unlabeled samples. Therefore, MSMPA-JRSSELM is applied to oil logging, using oil data provided by a Chinese oil field to verify the effectiveness of the algorithm. The block diagram of the oil layer identification system is shown in Figure 3.
From Figure 3, it can be seen that there are several major steps in oil layer identification.
Phase 1 (Data set delineation and pre-processing). The selection of the dataset should be complete and comprehensive and should be closely related to the oil layer evaluation. The data set is divided into two parts: training sample and test sample, in which a small portion of samples from the training set are selected as labeled samples.
Phase 2 (Attribute discretization and generalization). To generalize the attributes of the sample information, the decision attribute is $D = \{d\}$ with $d \in \{-1, 1\}$, where −1 and 1 represent the dry layer and the oil layer, respectively. For the discretization of the conditional attributes, in order to conform to actual geophysical characteristics, the continuous attributes are discretized by the curvilinear inflection point method [54]: each attribute is discretized separately in turn, i.e., the attribute values are first arranged from small to large to find possible inflection point locations, and then appropriate discretization points are filtered out according to the target layer range constraints.
Phase 3 (Attribute reduction of data information). Since the degree of decisiveness of each conditional attribute on the oil layer distribution is not consistent, there are more than ten conditional attributes in the logging data, but only a few conditional attributes play a decisive role, so it is necessary to eliminate other redundant attributes to avoid algorithm redundancy. In this paper, we adopt a Rough Set based on consistent coverage for attribute reduction [55].
Phase 4 (Training the MSMPA-JRSSELM classifier). In the MSMPA-JRSSELM model, the sample information is input after attribute reduction, and training is carried out using the JRSSELM model. The MSMPA is used to find better training parameters to improve the model recognition accuracy until the iterations are completed, and the optimal MSMPA-JRSSELM classification model is obtained.
Phase 5 (Test Set Prediction Task). The trained MSMPA-JRSSELM model is used to predict the formation for the entire well oil layer of the test set and output the results. To validate the validity and stability of the model, we selected data from two wells for the experimental comparative analysis.

5.2. Practical Applications

To verify the application effect of the semi-supervised model optimized by the improved algorithm, logging data from two wells were selected from the database for training and testing, recorded as well 1 and well 2. The data set division of these two wells is shown in Table 4, which gives the oil layer and dry layer distribution ranges in the training and test sets. In addition, the attribute reduction results of the two wells are shown in Table 5. It can be seen from Table 5 that there are many redundant conditional attributes in the original data; after Rough Set attribute reduction, the important attributes are selected, which greatly simplifies the algorithm. The value ranges of the attributes after reduction are shown in Table 6.
From Table 6, we can see the value range of each attribute of well 1 and well 2, where GR stands for natural gamma, DT for acoustic time difference, SP for natural potential, LLD for deep lateral resistivity, LLS for shallow lateral resistivity, DEN for compensated density, K for potassium, AC for acoustic time difference, RT for in situ ground resistivity, and RXO for flushed zone resistivity. For the reduced conditional attributes, since each attribute has different units and value ranges, the data are first normalized so that the sample data lie in [−1, 1]; the normalized influencing-factor data are then fed into the network for training and testing. The normalization formula is as follows:
$$\bar{x} = \frac{2 \times (x - x_{\min})}{x_{\max} - x_{\min}} - 1$$
where $x \in [x_{\min}, x_{\max}]$, $x_{\min}$ is the minimum value of the data sample attribute, and $x_{\max}$ is the maximum value of the data sample attribute.
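A one-line sketch of this normalization per attribute column:

```python
# Sketch of Eq. (45): min-max scaling of one attribute column to [-1, 1].
import numpy as np

def normalize(col):
    x_min, x_max = col.min(), col.max()
    return 2.0 * (col - x_min) / (x_max - x_min) - 1.0
```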
After normalizing the reduced attributes, the logging curve is shown in Figure 4, where the horizontal axis represents the depth and the vertical axis represents the normalized value. It can be seen from Figure 4 that the logging curves of each condition attribute are completely different, and effective information cannot be obtained directly from the logging curves.
To evaluate the performance of the recognition model, in addition to the test set prediction accuracy, we defined the following performance metrics:
$$\mathrm{RMSE} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\left(f(x_i) - y_i\right)^2}$$
$$\mathrm{MAE} = \frac{1}{m}\sum_{i=1}^{m}\left|f(x_i) - y_i\right|$$
where $f(x_i)$ and $y_i$ are the predicted and desired output values, respectively.
The RMSE measures the deviation between the predicted and true values, and the MAE is the average of the absolute errors. The smaller the RMSE and MAE, the better the performance of the model. RMSE is therefore used as a criterion for evaluating the accuracy of each model, while MAE is used to assess the actual prediction error, since it better reflects the true error of the predicted values.
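Both metrics in code (a minimal sketch on prediction and target arrays):

```python
# Sketch of Eqs. (46)-(47): RMSE and MAE between predictions and targets.
import numpy as np

def rmse(pred, y):
    return float(np.sqrt(np.mean((pred - y) ** 2)))

def mae(pred, y):
    return float(np.mean(np.abs(pred - y)))
```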
To comprehensively test the superiority of the proposed JRSSELM and MSMPA-JRSSELM models in semi-supervised learning, we set the proportion of labeled samples in the training set to 10% and 20% and conducted comparison experiments with the supervised learning model ELM and other semi-supervised learning models, including LapSVM [56], SSELM, and MPA-JRSSELM. Each model was run independently 30 times, with 100 hidden-layer neurons and a maximum of 100 iterations, and the average of the individual performance metrics was calculated. The classification performance of each model on well 1 is shown in Table 7. Note that although the proportion of labeled samples differs, the training set is unchanged for ELM, so its ACC, MAE, and RMSE are the same across the labeled-sample proportions.
From Table 7, it can be seen that LapSVM performs the worst, while the proposed JRSSELM improves classification accuracy by nearly 3% over SSELM. As the number of labeled samples in the training set increases, the performance of MSMPA-JRSSELM improves further after MSMPA optimizes the parameter selection, with accuracy nearly 5% higher than SSELM and 2% higher than JRSSELM, and with the lowest MAE and RMSE, reflecting the significant competitive advantage of MSMPA-JRSSELM on well 1.
The classification performance of each model on well 2 is shown in Table 8. Again, LapSVM performs the worst, and the classification performance of SSELM is inferior to that of ELM, while the proposed JRSSELM is slightly more accurate than ELM. More notably, MSMPA-JRSSELM improves classification accuracy by nearly 8% over SSELM, reaching about 98%, and its MAE and RMSE are also the lowest. This is helpful for layer-discrimination decision making and has practical engineering value, reflecting the clear advantage of the proposed classifier for oil layer identification in well logging.
To observe the original oil-test conclusions and the predicted oil layer distributions of the test sets more intuitively, the original and predicted distributions for the two wells are shown in Figure 5. For well 1, as the proportion of labeled samples increases, the oil layer prediction becomes noticeably more accurate, and the predicted positions are essentially consistent with the oil-test conclusions. For well 2, whether the model is trained with 10% or 20% labeled samples, the predictions are almost identical to the oil-test conclusions, which is of practical significance for assisting well-logging interpretation.

6. Conclusions

In this paper, to obtain better classification performance from only a small number of labeled samples together with a large number of unlabeled samples, a novel semi-supervised classification model, MSMPA-JRSSELM, is proposed.
First, MSMPA is designed to address the shortcomings of the original MPA in solving global optimization problems, namely slow convergence and a weak ability to escape local optima. Three efficient strategies are introduced in MSMPA: a chaotic opposition learning strategy ensures a high-quality initial population; adaptive inertia weights and an adaptive step control factor strengthen early global exploration and later fine-grained exploitation while speeding up convergence; and a neighborhood dimensional learning strategy maintains population diversity in each iteration. On the 18 classical benchmark functions and 30 CEC2017 competition functions, MSMPA exhibits significant superiority over the other algorithms; in particular, on high-dimensional complex problems it shows strong global search ability and the ability to avoid local optima.
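To make the first strategy concrete, the sketch below shows one common form of chaotic opposition-based initialization, assuming a tent map and greedy retention of the fitter half; the exact map constants and selection rule used in MSMPA may differ from this sketch.

```python
import numpy as np

def chaotic_opposition_init(pop_size, dim, lb, ub, fitness):
    """Sketch of chaotic opposition-based initialization (assumed tent map
    and greedy selection; the paper's exact constants may differ)."""
    rng = np.random.default_rng()
    # Tent-map chaotic sequence in (0, 1), iterated row by row
    z = np.empty((pop_size, dim))
    z[0] = rng.random(dim)
    for i in range(1, pop_size):
        z[i] = np.where(z[i-1] < 0.5, 2 * z[i-1], 2 * (1 - z[i-1]))
    X = lb + z * (ub - lb)          # chaotic population in the search bounds
    X_opp = lb + ub - X             # opposition population
    both = np.vstack([X, X_opp])
    fit = np.apply_along_axis(fitness, 1, both)
    return both[np.argsort(fit)[:pop_size]]  # keep the pop_size fittest

# Example on the 50-dimensional sphere function F1
init_pop = chaotic_opposition_init(30, 50, -100.0, 100.0, lambda x: np.sum(x**2))
```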
Secondly, since SSELM uses Laplace regularization, whose extrapolation power is weak, Hessian regularization, which extrapolates better and preserves the manifold structure, is used instead. Third, to address SSELM's inability to fully exploit the valid information embedded in the labeled samples, a supervised regularization term that assigns new coefficient weights to the given labeled information is proposed. Adding Hessian regularization and supervised regularization to ELM yields JRSSELM. Finally, to further improve the classification performance of JRSSELM, MSMPA is used to optimize the selection of the input weights, hidden-layer thresholds, and hyperparameters of JRSSELM. To verify their effectiveness, JRSSELM and MSMPA-JRSSELM are applied to oil layer identification in well logging. The experimental results show that JRSSELM and MSMPA-JRSSELM outperform SSELM and other popular classification methods in ACC, MAE, and RMSE, with MSMPA-JRSSELM showing the best classification performance.
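As a schematic of how the joint regularization enters the ELM solution, the sketch below follows the usual SSELM-style closed-form output-weight solve, with the graph Laplacian replaced by a Hessian-energy matrix B and an additional supervised regularization matrix S. The matrices B and S, the weights lam1 and lam2, and the penalty C are placeholders: the paper's exact construction and coefficient weighting are not reproduced here, so this is a sketch of the general form rather than the authors' implementation.

```python
import numpy as np

def jrsselm_beta(H, Y, mask, B, S, C=1.0, lam1=0.1, lam2=0.1):
    """Hedged sketch of a joint-regularized ELM output-weight solve.
    H    : (n, nh) hidden-layer output matrix (labeled + unlabeled rows)
    Y    : (n, c)  targets, zero rows for unlabeled samples
    mask : (n,)    1 for labeled rows, 0 otherwise
    B    : (n, n)  Hessian-energy matrix (replaces the Laplacian of SSELM)
    S    : (n, n)  supervised-information regularization matrix (placeholder)
    """
    n, nh = H.shape
    J = np.diag(C * mask)                       # penalty only on labeled rows
    A = (np.eye(nh) + H.T @ J @ H
         + lam1 * H.T @ B @ H                   # Hessian regularization
         + lam2 * H.T @ S @ H)                  # supervised regularization
    return np.linalg.solve(A, H.T @ J @ Y)      # beta: (nh, c)
```

Under this formulation, MSMPA would search over the random input weights, hidden thresholds, and the scalar hyperparameters (C, lam1, lam2), using classification error as the fitness.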
Further research on and applications of MPA and SSELM are warranted, such as combining MPA with other meta-heuristics, introducing cost-sensitive learning into SSELM, and applying MPA and SSELM to other complex engineering problems. We also consider combining SSELM with deep learning, adding autoencoders, convolution, downsampling, and other deep learning components to realize automatic feature extraction, and finally applying the result to regression problems such as short-term power load forecasting, meteorological load forecasting, and wind power interval forecasting.

Author Contributions

W.Y.: Conceptualization, Methodology, Software, Writing—Original Draft, Writing—Review & Editing, Supervision. K.X.: Validation, Investigation, Project administration. T.L.: Data Curation, Resources, Funding acquisition. M.X.: Formal analysis. F.S.: Visualization. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. U1813222, No. 42075129), the Tianjin Natural Science Foundation (No. 18JCYBJC16500), and the Key Research and Development Project of Hebei Province (No. 19210404D, No. 20351802D).

Data Availability Statement

All data are available upon request from the corresponding author.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Abbreviations

The main abbreviations in this paper are shown below.
ELM: Extreme Learning Machines
SSELM: Semi-Supervised Extreme Learning Machine
JRSSELM: Joint Regularized Semi-Supervised Extreme Learning Machine
MPA: Marine Predator Algorithm
MSMPA: Multi-Strategy Marine Predator Algorithm
PSO: Particle Swarm Optimization
GWO: Gray Wolf Optimization
MFO: Moth Flame Optimization
SOA: Seagull Optimization Algorithm
SCA: Sine Cosine Algorithm
WOA: Whale Optimization Algorithm
COA: Coyote Optimization Algorithm
CPA: Carnivorous Plant Algorithm
TSA: Transient Search Algorithm
RF: Random Forests
SVM: Support Vector Machines
LapSVM: Laplace Support Vector Machines
FADS: Fish Aggregation Device Effects
NDL: Neighborhood Dimensional Learning
GR: natural gamma
DT: acoustic time difference
SP: natural potential
LLD: deep lateral resistivity
LLS: shallow lateral resistivity
DEN: compensation density
K: potassium
AC: acoustic time difference
RT: in situ ground resistivity
RXO: flush zone resistivity

Appendix A

The experimental results of Section 3 are presented below, including the statistics for each algorithm, Wilcoxon's rank-sum test results, convergence curves, and box plots.
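For reproducibility, p-values of the kind reported in Table A3 can be obtained with SciPy's rank-sum test, as in the hedged sketch below; the run data here are synthetic stand-ins, since only the summary statistics are tabulated in this appendix.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Hypothetical best-fitness values over 30 independent runs of two algorithms
msmpa_runs = rng.normal(0.0, 1e-6, 30)
mpa_runs = rng.normal(1e-3, 1e-4, 30)

stat, p = ranksums(msmpa_runs, mpa_runs)
print(f"p-value = {p:.2e}")  # p < 0.05 suggests significantly different medians
```

The NaN entries in Table A3 presumably correspond to cases where the two compared algorithms returned identical results in every run, so no meaningful ranking is possible.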
Figure A1. Convergence curves for 18 benchmark functions.
Figure A2. Box plots of partially selected functions.
Table A1. Test results of 18 benchmark functions.
Function | Criteria | MSMPA | MPA | PSO | GWO | WOA | MFO | SOA | SCA
F1Ave0.00 × 10+002.69 × 10−466.83 × 10+006.68 × 10−441.04 × 10−1477.07 × 10+031.51 × 10−202.49 × 10+02
Std0.00 × 10+005.14 × 10−462.36 × 10+006.83 × 10−445.56 × 10−1478.97 × 10+036.40 × 10−204.54 × 10+02
Max0.00 × 10+002.51 × 10−451.32 × 10+012.82 × 10−433.10 × 10−1463.00 × 10+043.58 × 10−192.10 × 10+03
Min0.00 × 10+008.73 × 10−502.58 × 10+001.17 × 10−457.10 × 10−1695.58 × 10−018.03 × 10−243.35 × 10−01
F2Ave0.00 × 10+001.54 × 10−261.07 × 10+014.86 × 10−261.53 × 10−1027.23 × 10+015.22 × 10−143.62 × 10−02
Std0.00 × 10+001.84 × 10−261.99 × 10+007.31 × 10−267.40 × 10−1023.59 × 10+014.46 × 10−141.07 × 10−01
Max0.00 × 10+006.62 × 10−261.43 × 10+014.06 × 10−254.13 × 10−1011.60 × 10+021.68 × 10−136.03 × 10−01
Min0.00 × 10+002.69 × 10−296.18 × 10+006.58 × 10−272.77 × 10−1131.02 × 10+012.92 × 10−151.68 × 10−04
F3Ave0.00 × 10+001.41 × 10−071.14 × 10+034.79 × 10−051.26 × 10+055.10 × 10+045.27 × 10−083.40 × 10+04
Std0.00 × 10+004.73 × 10−072.68 × 10+022.41 × 10−043.33 × 10+042.18 × 10+041.54 × 10−071.27 × 10+04
Max0.00 × 10+001.94 × 10−061.72 × 10+031.35 × 10−031.92 × 10+059.65 × 10+047.80 × 10−076.49 × 10+04
Min0.00 × 10+001.15 × 10−154.71 × 10+029.28 × 10−106.77 × 10+041.72 × 10+042.45 × 10−139.15 × 10+03
F4Ave0.00 × 10+001.34 × 10−173.27 × 10+001.54 × 10−096.31 × 10+018.44 × 10+012.51 × 10−015.97 × 10+01
Std0.00 × 10+001.04 × 10−173.66 × 10−011.77 × 10−092.58 × 10+014.68 × 10+007.91 × 10−018.01 × 10+00
Max0.00 × 10+004.37 × 10−174.44 × 10+007.16 × 10−099.30 × 10+019.22 × 10+014.09 × 10+007.34 × 10+01
Min0.00 × 10+002.09 × 10−182.30 × 10+001.35 × 10−106.17 × 10+007.73 × 10+016.95 × 10−074.50 × 10+01
F5Ave7.20 × 10−064.44 × 10+013.46 × 10+034.73 × 10+014.77 × 10+011.34 × 10+074.84 × 10+016.32 × 10+05
Std1.10 × 10−057.72 × 10−011.57 × 10+037.86 × 10−015.31 × 10−013.62 × 10+073.98 × 10−019.81 × 10+05
Max4.12 × 10−054.77 × 10+017.11 × 10+034.86 × 10+014.86 × 10+011.60 × 10+084.88 × 10+014.37 × 10+06
Min3.86 × 10−124.32 × 10+011.32 × 10+034.61 × 10+014.67 × 10+014.03 × 10+024.72 × 10+014.06 × 10+03
F6Ave7.49 × 10−071.13 × 10−026.38 × 10+002.40 × 10+004.11 × 10−018.09 × 10+037.19 × 10+001.75 × 10+02
Std9.73 × 10−074.31 × 10−022.08 × 10+005.60 × 10−012.24 × 10−019.11 × 10+035.36 × 10−013.22 × 10+02
Max3.87 × 10−062.16 × 10−011.25 × 10+013.25 × 10+001.04 × 10+004.02 × 10+048.13 × 10+001.22 × 10+03
Min1.05 × 10−083.24 × 10−083.04 × 10+001.25 × 10+008.74 × 10−021.08 × 10+005.89 × 10+008.70 × 10+00
F7Ave2.48 × 10−056.96 × 10−041.63 × 10+021.24 × 10−031.50 × 10−031.56 × 10+011.62 × 10−039.81 × 10−01
Std1.95 × 10−053.03 × 10−046.65 × 10+016.62 × 10−041.77 × 10−031.87 × 10+011.59 × 10−031.29 × 10+00
Max8.65 × 10−051.17 × 10−033.12 × 10+022.75 × 10−037.74 × 10−037.84 × 10+017.56 × 10−035.10 × 10+00
Min8.50 × 10−071.76 × 10−044.01 × 10+014.42 × 10−042.61 × 10−064.46 × 10−012.76 × 10−042.77 × 10−02
F8Ave−4.67 × 10+04−1.51 × 10+04−9.99 × 10+03−9.71 × 10+03−1.86 × 10+04−1.31 × 10+04−7.25 × 10+03−5.02 × 10+03
Std1.12 × 10+046.77 × 10+022.01 × 10+038.12 × 10+022.60 × 10+031.27 × 10+031.09 × 10+033.08 × 10+02
Max−1.85 × 10+04−1.39 × 10+04−4.30 × 10+03−8.11 × 10+03−1.32 × 10+04−1.10 × 10+04−5.82 × 10+03−4.61 × 10+03
Min−5.45 × 10+04−1.63 × 10+04−1.25 × 10+04−1.12 × 10+04−2.09 × 10+04−1.51 × 10+04−1.00 × 10+04−5.68 × 10+03
F9Ave0.00 × 10+000.00 × 10+003.18 × 10+023.24 × 10−010.00 × 10+002.98 × 10+021.16 × 10−016.41 × 10+01
Std0.00 × 10+000.00 × 10+004.37 × 10+011.39 × 10+000.00 × 10+006.59 × 10+016.23 × 10−014.72 × 10+01
Max0.00 × 10+000.00 × 10+004.11 × 10+027.48 × 10+000.00 × 10+004.26 × 10+023.47 × 10+001.74 × 10+02
Min0.00 × 10+000.00 × 10+002.25 × 10+020.00 × 10+000.00 × 10+001.73 × 10+020.00 × 10+001.32 × 10−01
F10Ave8.88 × 10−164.32 × 10−153.18 × 10+003.32 × 10−143.73 × 10−151.95 × 10+012.00 × 10+011.94 × 10+01
Std0.00 × 10+006.38 × 10−163.72 × 10−014.71 × 10−152.81 × 10−155.73 × 10−019.30 × 10−043.73 × 10+00
Max8.88 × 10−164.44 × 10−154.04 × 10+004.35 × 10−147.99 × 10−152.00 × 10+012.00 × 10+012.05 × 10+01
Min8.88 × 10−168.88 × 10−162.49 × 10+002.22 × 10−148.88 × 10−161.76 × 10+012.00 × 10+019.91 × 10−02
F11Ave0.00 × 10+000.00 × 10+001.90 × 10−012.40 × 10−038.03 × 10−037.33 × 10+014.70 × 10−031.31 × 10+00
Std0.00 × 10+000.00 × 10+007.04 × 10−027.28 × 10−032.42 × 10−027.82 × 10+011.35 × 10−027.09 × 10−01
Max0.00 × 10+000.00 × 10+004.10 × 10−012.71 × 10−028.76 × 10−022.71 × 10+026.16 × 10−024.82 × 10+00
Min0.00 × 10+000.00 × 10+009.82 × 10−020.00 × 10+000.00 × 10+007.64 × 10−010.00 × 10+006.90 × 10−01
F12Ave2.33 × 10−091.88 × 10−041.96 × 10−017.81 × 10−021.15 × 10−021.71 × 10+074.92 × 10−015.61 × 10+06
Std2.83 × 10−096.10 × 10−043.31 × 10−012.68 × 10−027.35 × 10−036.39 × 10+071.00 × 10−011.42 × 10+07
Max1.42 × 10−082.96 × 10−031.88 × 10+001.54 × 10−013.48 × 10−022.56 × 10+087.93 × 10−017.01 × 10+07
Min2.10 × 10−121.78 × 10−093.36 × 10−023.38 × 10−023.39 × 10−032.27 × 10+003.42 × 10−015.09 × 10+00
F13Ave4.59 × 10−089.99 × 10−021.96 × 10+001.92 × 10+006.35 × 10−014.10 × 10+073.95 × 10+003.43 × 10+06
Std5.09 × 10−089.80 × 10−025.29 × 10−013.47 × 10−012.93 × 10−011.23 × 10+082.44 × 10−015.41 × 10+06
Max2.06 × 10−073.73 × 10−012.82 × 10+002.45 × 10+001.41 × 10+004.10 × 10+084.46 × 10+002.59 × 10+07
Min5.46 × 10−118.88 × 10−081.14 × 10+007.63 × 10−011.92 × 10−012.27 × 10+013.54 × 10+006.89 × 10+02
F14Ave9.98 × 10−019.98 × 10−013.30 × 10+003.52 × 10+002.08 × 10+002.25 × 10+001.59 × 10+001.59 × 10+00
Std0.00 × 10+005.73 × 10−172.67 × 10+003.54 × 10+002.45 × 10+002.10 × 10+009.09 × 10−019.08 × 10−01
Max9.98 × 10−019.98 × 10−011.08 × 10+011.27 × 10+011.08 × 10+011.08 × 10+012.98 × 10+002.98 × 10+00
Min9.98 × 10−019.98 × 10−019.98 × 10−019.98 × 10−019.98 × 10−019.98 × 10−019.98 × 10−019.98 × 10−01
F15Ave0.00040.00030.00080.00380.00070.0010.00120.0009
Std0.000200.00030.00740.00050.000400.0004
Max0.00120.00030.00190.02040.00220.00230.00120.0015
Min0.00030.00030.00040.00030.00030.00060.00120.0003
F16Ave−10.1532−10.1532−7.7963−9.6475−8.6505−8.0628−3.5614−2.885
Std002.75641.51572.58313.03774.36771.8174
Max−10.1532−10.1532−2.6305−5.1003−0.881−2.6305−0.3507−0.4965
Min−10.1532−10.1532−10.1532−10.1531−10.153−10.1532−10.1373−4.9475
F17Ave−10.4028−10.4028−10.2258−10.2267−8.5934−8.7168−6.0807−4.1795
Std000.95410.94672.78463.0654.50852.4846
Max−10.4028−10.4028−5.0877−5.1284−2.7655−2.7659−0.3724−0.5224
Min−10.4028−10.4028−10.4028−10.4028−10.4028−10.4028−10.4005−9.5476
F18Ave−10.5363−10.5363−9.8201−10.536−9.014−8.2934−8.0033−4.7824
Std001.82630.00022.54923.2253.9381.5385
Max−10.5363−10.5363−5.1285−10.5354−2.8064−2.4273−0.5542−0.9448
Min−10.5363−10.5363−10.5363−10.5363−10.5363−10.5363−10.5342−8.8356
Friedman Average Rank | 1.6551 | 2.4101 | 4.9058 | 4.4775 | 4.3225 | 5.3986 | 5.7667 | 7.0638
Rank | 1 | 2 | 5 | 4 | 3 | 6 | 7 | 8
Table A2. CEC2017 optimization function results.
Function | Criteria | MSMPA | MPA | PSO | GWO | WOA | MFO | SOA | SCA
CF1Ave1.00 × 10+021.00 × 10+022.67 × 10+032.46 × 10+061.37 × 10+104.68 × 10+061.84 × 10+086.00 × 10+08
Std1.58 × 10−056.00 × 10−032.86 × 10+036.44 × 10+065.50 × 10+091.75 × 10+071.83 × 10+082.07 × 10+08
CF2Ave2.00 × 10+022.00 × 10+022.00 × 10+023.68 × 10+061.57 × 10+121.04 × 10+081.29 × 10+077.48 × 10+06
Std0.00 × 10+000.00 × 10+000.00 × 10+001.08 × 10+075.97 × 10+124.01 × 10+081.78 × 10+071.45 × 10+07
CF3Ave3.00 × 10+023.00 × 10+023.00 × 10+028.85 × 10+022.36 × 10+043.13 × 10+031.47 × 10+031.06 × 10+03
Std4.01 × 10−102.83 × 10−082.29 × 10−081.15 × 10+031.33 × 10+044.97 × 10+031.08 × 10+034.75 × 10+02
CF4Ave4.00 × 10+024.00 × 10+024.03 × 10+024.12 × 10+021.50 × 10+034.09 × 10+024.40 × 10+024.33 × 10+02
Std1.20 × 10−078.73 × 10−081.07 × 10+001.37 × 10+016.46 × 10+021.94 × 10+013.81 × 10+011.11 × 10+01
CF5Ave5.06 × 10+025.08 × 10+025.08 × 10+025.12 × 10+026.22 × 10+025.24 × 10+025.24 × 10+025.44 × 10+02
Std2.00 × 10+002.36 × 10+003.43 × 10+004.20 × 10+002.05 × 10+018.03 × 10+008.55 × 10+005.94 × 10+00
CF6Ave6.00 × 10+026.00 × 10+026.00 × 10+026.00 × 10+026.74 × 10+026.00 × 10+026.08 × 10+026.16 × 10+02
Std3.28 × 10−021.02 × 10−044.34 × 10−117.31 × 10−011.50 × 10+019.98 × 10−014.30 × 10+003.22 × 10+00
CF7Ave7.12 × 10+027.19 × 10+027.17 × 10+027.26 × 10+029.65 × 10+027.35 × 10+027.53 × 10+027.70 × 10+02
Std2.42 × 10+002.65 × 10+004.55 × 10+009.34 × 10+001.01 × 10+021.24 × 10+011.49 × 10+017.80 × 10+00
CF8Ave8.05 × 10+028.06 × 10+028.08 × 10+028.10 × 10+028.92 × 10+028.26 × 10+028.24 × 10+028.37 × 10+02
Std2.27 × 10+001.98 × 10+003.04 × 10+004.00 × 10+001.98 × 10+011.21 × 10+016.17 × 10+005.99 × 10+00
CF9Ave9.00 × 10+029.00 × 10+029.00 × 10+029.04 × 10+022.74 × 10+039.38 × 10+029.84 × 10+029.72 × 10+02
Std1.40 × 10−042.01 × 10−086.56 × 10−149.47 × 10+009.39 × 10+021.01 × 10+029.03 × 10+012.60 × 10+01
CF10Ave1.19 × 10+031.27 × 10+031.28 × 10+031.57 × 10+032.80 × 10+031.79 × 10+031.71 × 10+032.19 × 10+03
Std1.19 × 10+029.66 × 10+011.23 × 10+022.44 × 10+021.70 × 10+023.19 × 10+022.22 × 10+021.99 × 10+02
CF11Ave1.10 × 10+031.10 × 10+031.10 × 10+031.13 × 10+035.46 × 10+031.13 × 10+031.20 × 10+031.19 × 10+03
Std1.26 × 10+007.50 × 10−012.34 × 10+001.06 × 10+015.34 × 10+033.89 × 10+017.76 × 10+016.08 × 10+01
CF12Ave1.20 × 10+031.20 × 10+031.95 × 10+044.05 × 10+054.95 × 10+089.56 × 10+052.58 × 10+068.06 × 10+06
Std1.68 × 10+015.14 × 10+001.91 × 10+045.72 × 10+054.98 × 10+082.16 × 10+062.43 × 10+066.55 × 10+06
CF13Ave1.30 × 10+031.30 × 10+035.92 × 10+031.07 × 10+043.07 × 10+071.17 × 10+041.67 × 10+042.28 × 10+04
Std1.84 × 10+001.92 × 10+006.17 × 10+036.91 × 10+035.54 × 10+071.25 × 10+041.13 × 10+041.85 × 10+04
CF14Ave1.40 × 10+031.40 × 10+031.43 × 10+032.16 × 10+035.72 × 10+032.25 × 10+031.55 × 10+031.57 × 10+03
Std1.95 × 10+002.07 × 10+001.10 × 10+011.40 × 10+038.46 × 10+039.25 × 10+026.39 × 10+015.21 × 10+01
CF15Ave1.50 × 10+031.50 × 10+031.53 × 10+033.10 × 10+032.09 × 10+044.43 × 10+032.26 × 10+032.09 × 10+03
Std4.36 × 10−014.79 × 10−012.45 × 10+011.90 × 10+032.58 × 10+042.65 × 10+037.50 × 10+025.56 × 10+02
CF16Ave1.61 × 10+031.60 × 10+031.61 × 10+031.69 × 10+032.24 × 10+031.68 × 10+031.68 × 10+031.70 × 10+03
Std2.97 × 10+014.22 × 10−013.35 × 10+015.79 × 10+012.38 × 10+029.95 × 10+017.44 × 10+015.33 × 10+01
CF17Ave1.71 × 10+031.71 × 10+031.71 × 10+031.75 × 10+032.01 × 10+031.74 × 10+031.77 × 10+031.77 × 10+03
Std6.04 × 10+006.48 × 10+009.17 × 10+002.50 × 10+011.60 × 10+021.76 × 10+013.37 × 10+019.68 × 10+00
CF18Ave1.80 × 10+031.80 × 10+034.59 × 10+032.62 × 10+043.92 × 10+072.38 × 10+043.88 × 10+046.88 × 10+04
Std1.41 × 10+001.17 × 10+002.47 × 10+031.43 × 10+047.49 × 10+071.82 × 10+041.08 × 10+043.97 × 10+04
CF19Ave1.90 × 10+031.90 × 10+031.92 × 10+034.64 × 10+038.76 × 10+058.04 × 10+037.50 × 10+032.68 × 10+03
Std2.72 × 10−014.28 × 10−012.52 × 10+014.73 × 10+032.30 × 10+061.03 × 10+046.35 × 10+031.64 × 10+03
CF20Ave2.01 × 10+032.01 × 10+032.01 × 10+032.06 × 10+032.29 × 10+032.04 × 10+032.09 × 10+032.09 × 10+03
Std4.48 × 10+008.41 × 10+001.06 × 10+014.01 × 10+017.52 × 10+012.26 × 10+015.71 × 10+012.08 × 10+01
CF21Ave2.24 × 10+032.20 × 10+032.29 × 10+032.30 × 10+032.35 × 10+032.27 × 10+032.20 × 10+032.23 × 10+03
Std5.24 × 10+011.29 × 10−054.64 × 10+013.84 × 10+015.63 × 10+016.23 × 10+012.77 × 10+004.56 × 10+01
CF22Ave2.30 × 10+032.24 × 10+032.30 × 10+032.31 × 10+033.32 × 10+032.30 × 10+032.45 × 10+032.35 × 10+03
Std1.84 × 10+014.78 × 10+011.51 × 10+014.85 × 10+004.75 × 10+022.19 × 10+013.61 × 10+021.83 × 10+01
CF23Ave2.51 × 10+032.58 × 10+032.61 × 10+032.61 × 10+032.73 × 10+032.63 × 10+032.63 × 10+032.65 × 10+03
Std2.17 × 10+008.39 × 10+014.08 × 10+006.63 × 10+002.71 × 10+018.45 × 10+006.90 × 10+006.25 × 10+00
CF24Ave2.70 × 10+032.49 × 10+032.72 × 10+032.75 × 10+032.90 × 10+032.76 × 10+032.75 × 10+032.76 × 10+03
Std7.88 × 10+012.49 × 10+015.98 × 10+011.03 × 10+016.38 × 10+019.34 × 10+009.90 × 10+006.77 × 10+01
CF25Ave2.91 × 10+032.86 × 10+032.92 × 10+032.93 × 10+034.01 × 10+032.93 × 10+032.94 × 10+032.95 × 10+03
Std2.00 × 10+011.01 × 10+022.34 × 10+011.89 × 10+014.44 × 10+022.80 × 10+012.61 × 10+011.51 × 10+01
CF26Ave2.90 × 10+032.70 × 10+032.93 × 10+032.93 × 10+034.47 × 10+032.98 × 10+033.06 × 10+033.06 × 10+03
Std2.44 × 10−041.05 × 10+021.69 × 10+021.78 × 10+023.99 × 10+025.46 × 10+012.37 × 10+022.22 × 10+01
CF27Ave3.08 × 10+033.09 × 10+033.10 × 10+033.09 × 10+033.26 × 10+033.09 × 10+033.09 × 10+033.10 × 10+03
Std1.14 × 10+005.67 × 10−011.15 × 10+012.90 × 10+006.07 × 10+011.95 × 10+001.91 × 10+001.90 × 10+00
CF28Ave3.11 × 10+033.07 × 10+033.23 × 10+033.35 × 10+033.69 × 10+033.25 × 10+033.23 × 10+033.25 × 10+03
Std3.23 × 10+019.00 × 10+011.59 × 10+028.68 × 10+011.15 × 10+028.01 × 10+018.22 × 10+015.64 × 10+01
CF29Ave3.14 × 10+033.13 × 10+033.16 × 10+033.17 × 10+033.58 × 10+033.20 × 10+033.18 × 10+033.21 × 10+03
Std7.29 × 10+009.20 × 10+001.90 × 10+011.98 × 10+011.34 × 10+024.05 × 10+012.57 × 10+012.17 × 10+01
CF30Ave3.31 × 10+033.40 × 10+036.02 × 10+055.65 × 10+051.23 × 10+075.64 × 10+057.35 × 10+046.44 × 10+05
Std3.66 × 10+014.54 × 10+008.68 × 10+056.42 × 10+051.03 × 10+075.53 × 10+051.25 × 10+054.64 × 10+05
Friedman Average Rank | 1.6167 | 2.4489 | 2.9056 | 4.6561 | 7.9300 | 4.7728 | 5.3978 | 6.2722
Rank | 1 | 2 | 3 | 4 | 8 | 5 | 6 | 7
Table A3. p-values obtained from Wilcoxon's rank sum test for MSMPA and other algorithms.
Function | MSMPA vs. MPA | vs. PSO | vs. GWO | vs. WOA | vs. MFO | vs. SOA | vs. SCA
F1p-value1.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−12
F2p-value1.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−12
F3p-value1.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−12
F4p-value1.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−12
F5p-value3.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−11
F6p-value8.56 × 10−043.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−11
F7p-value3.02 × 10−113.02 × 10−113.02 × 10−111.07 × 10−093.02 × 10−113.02 × 10−113.02 × 10−11
F8p-value3.02 × 10−113.02 × 10−113.02 × 10−118.10 × 10−103.02 × 10−113.02 × 10−113.02 × 10−11
F9p-valueNaN1.21 × 10−121.14 × 10−05NaN1.21 × 10−122.79 × 10−031.21 × 10−12
F10p-value1.17 × 10−131.21 × 10−128.56 × 10−131.97 × 10−061.21 × 10−121.21 × 10−121.21 × 10−12
F11p-valueNaN1.21 × 10−128.15 × 10−028.15 × 10−021.21 × 10−125.58 × 10−031.21 × 10−12
F12p-value1.25 × 10−073.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−11
F13p-value8.15 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−11
F14p-value1.61 × 10−014.42 × 10−081.21 × 10−121.21 × 10−122.90 × 10−051.21 × 10−121.21 × 10−12
F15p-value1.05 × 10−056.59 × 10−091.30 × 10−092.69 × 10−091.08 × 10−092.73 × 10−111.30 × 10−09
F16p-value3.49 × 10−041.38 × 10−098.87 × 10−128.87 × 10−122.04 × 10−018.87 × 10−128.87 × 10−12
F17p-value4.18 × 10−023.03 × 10−034.08 × 10−124.08 × 10−125.20 × 10−024.08 × 10−124.08 × 10−12
F18p-value2.85 × 10−051.22 × 10−094.08 × 10−124.08 × 10−123.40 × 10−024.08 × 10−124.08 × 10−12
CF1p-value3.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−112.95 × 10−113.02 × 10−113.02 × 10−11
CF2p-valueNaNNaN4.57 × 10−121.21 × 10−125.60 × 10−111.21 × 10−121.21 × 10−12
CF3p-value3.02 × 10−113.50 × 10−033.02 × 10−113.02 × 10−118.77 × 10−013.02 × 10−113.02 × 10−11
CF4p-value1.09 × 10−013.02 × 10−113.02 × 10−113.02 × 10−112.86 × 10−113.02 × 10−113.02 × 10−11
CF5p-value1.06 × 10−035.82 × 10−031.25 × 10−073.02 × 10−113.02 × 10−114.62 × 10−103.02 × 10−11
CF6p-value3.02 × 10−119.37 × 10−129.35 × 10−013.02 × 10−113.46 × 10−043.02 × 10−113.02 × 10−11
CF7p-value5.86 × 10−061.87 × 10−055.49 × 10−013.02 × 10−112.57 × 10−073.02 × 10−113.02 × 10−11
CF8p-value6.35 × 10−025.94 × 10−051.25 × 10−073.02 × 10−112.15 × 10−103.02 × 10−113.02 × 10−11
CF9p-value3.02 × 10−111.14 × 10−113.02 × 10−113.02 × 10−117.26 × 10−023.02 × 10−113.02 × 10−11
CF10p-value1.38 × 10−022.32 × 10−027.12 × 10−093.02 × 10−118.99 × 10−114.97 × 10−113.02 × 10−11
CF11p-value2.03 × 10−077.96 × 10−014.50 × 10−113.02 × 10−117.09 × 10−083.02 × 10−113.02 × 10−11
CF12p-value3.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−11
CF13p-value1.56 × 10−089.92 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−11
CF14p-value7.77 × 10−092.02 × 10−083.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−11
CF15p-value4.69 × 10−086.70 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−11
CF16p-value4.42 × 10−062.68 × 10−066.72 × 10−103.34 × 10−113.96 × 10−088.10 × 10−104.20 × 10−10
CF17p-value3.82 × 10−091.04 × 10−041.33 × 10−103.02 × 10−114.80 × 10−074.08 × 10−113.02 × 10−11
CF18p-value1.60 × 10−073.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−11
CF19p-value3.02 × 10−114.50 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−11
CF20p-value2.90 × 10−015.59 × 10−014.50 × 10−113.02 × 10−111.07 × 10−073.02 × 10−113.02 × 10−11
CF21p-value4.29 × 10−011.41 × 10−042.38 × 10−074.11 × 10−073.99 × 10−041.86 × 10−014.21 × 10−02
CF22p-value1.78 × 10−101.68 × 10−041.00 × 10−033.02 × 10−111.62 × 10−016.74 × 10−065.07 × 10−10
CF23p-value2.12 × 10−013.63 × 10−012.15 × 10−063.02 × 10−113.34 × 10−113.34 × 10−113.02 × 10−11
CF24p-value4.62 × 10−104.11 × 10−079.76 × 10−101.96 × 10−103.02 × 10−118.89 × 10−103.96 × 10−08
CF25p-value4.50 × 10−112.77 × 10−057.60 × 10−073.02 × 10−111.39 × 10−067.66 × 10−054.11 × 10−07
CF26p-value2.61 × 10−106.44 × 10−098.48 × 10−093.02 × 10−119.47 × 10−063.02 × 10−113.02 × 10−11
CF27p-value3.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−11
CF28p-value1.67 × 10−013.04 × 10−019.92 × 10−113.02 × 10−114.59 × 10−102.92 × 10−093.02 × 10−11
CF29p-value3.03 × 10−031.17 × 10−055.00 × 10−093.02 × 10−117.38 × 10−105.07 × 10−103.02 × 10−11
CF30p-value3.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.01 × 10−11
Table A4. Test results for each high-dimensional function in dimension 100.
Function | Criteria | MSMPA | MPA | PSO | GWO | WOA | MFO | SOA | SCA
F1Ave0.00 × 10+003.85 × 10−439.41 × 10+011.84 × 10−297.06 × 10−1463.17 × 10+046.07 × 10−155.59 × 10+03
Std0.00 × 10+003.58 × 10−431.46 × 10+011.73 × 10−293.80 × 10−1451.25 × 10+041.14 × 10−144.77 × 10+03
Max0.00 × 10+001.50 × 10−421.37 × 10+026.52 × 10−292.12 × 10−1445.46 × 10+044.98 × 10−141.91 × 10+04
Min0.00 × 10+002.21 × 10−447.18 × 10+014.12 × 10−302.81 × 10−1691.43 × 10+041.68 × 10−172.73 × 10+01
F2Ave0.00 × 10+008.27 × 10−251.10 × 10+025.34 × 10−183.13 × 10−1011.79 × 10+028.13 × 10−111.99 × 10+00
Std0.00 × 10+001.44 × 10−242.24 × 10+013.05 × 10−181.63 × 10−1005.48 × 10+016.58 × 10−112.27 × 10+00
Max0.00 × 10+006.14 × 10−241.71 × 10+021.66 × 10−179.06 × 10−1003.30 × 10+022.30 × 10−109.10 × 10+00
Min0.00 × 10+002.27 × 10−276.46 × 10+012.06 × 10−181.63 × 10−1101.03 × 10+025.48 × 10−128.84 × 10−02
F3Ave0.00 × 10+002.49 × 10−021.39 × 10+041.06 × 10+018.92 × 10+051.80 × 10+052.11 × 10−021.92 × 10+05
Std0.00 × 10+009.79 × 10−022.69 × 10+032.70 × 10+012.40 × 10+054.35 × 10+048.57 × 10−024.22 × 10+04
Max0.00 × 10+005.42 × 10−012.00 × 10+041.41 × 10+021.45 × 10+062.80 × 10+054.77 × 10−012.93 × 10+05
Min0.00 × 10+001.13 × 10−071.00 × 10+042.38 × 10−024.91 × 10+051.04 × 10+053.52 × 10−071.12 × 10+05
F4Ave0.00 × 10+005.90 × 10−161.03 × 10+011.60 × 10−027.20 × 10+019.32 × 10+015.60 × 10+018.66 × 10+01
Std0.00 × 10+003.81 × 10−161.03 × 10+007.52 × 10−022.86 × 10+011.78 × 10+002.47 × 10+013.84 × 10+00
Max0.00 × 10+001.88 × 10−151.25 × 10+014.21 × 10−019.62 × 10+019.60 × 10+018.51 × 10+019.34 × 10+01
Min0.00 × 10+001.52 × 10−168.46 × 10+003.78 × 10−051.10 × 10−048.84 × 10+013.28 × 10+007.87 × 10+01
F5Ave2.25 × 10−059.57 × 10+011.04 × 10+059.76 × 10+019.78 × 10+016.84 × 10+079.86 × 10+015.17 × 10+07
Std3.40 × 10−051.11 × 10+003.00 × 10+046.31 × 10−014.12 × 10−015.92 × 10+071.90 × 10−013.00 × 10+07
Max1.32 × 10−049.78 × 10+012.10 × 10+059.85 × 10+019.83 × 10+012.62 × 10+089.88 × 10+011.21 × 10+08
Min7.28 × 10−159.41 × 10+016.35 × 10+049.61 × 10+019.70 × 10+013.05 × 10+069.80 × 10+011.04 × 10+07
F6Ave1.45 × 10−059.22 × 10−011.00 × 10+029.29 × 10+002.08 × 10+003.19 × 10+041.83 × 10+016.73 × 10+03
Std4.13 × 10−054.35 × 10−011.87 × 10+018.26 × 10−017.60 × 10−011.34 × 10+047.55 × 10−014.69 × 10+03
Max2.25 × 10−041.82 × 10+001.40 × 10+021.09 × 10+014.49 × 10+005.75 × 10+041.97 × 10+011.78 × 10+04
Min1.85 × 10−081.03 × 10−017.24 × 10+017.69 × 10+001.03 × 10+005.68 × 10+031.66 × 10+011.57 × 10+02
F7Ave3.26 × 10−057.46 × 10−041.41 × 10+032.48 × 10−032.60 × 10−031.81 × 10+022.95 × 10−036.74 × 10+01
Std2.41 × 10−053.55 × 10−042.64 × 10+027.68 × 10−042.09 × 10−031.27 × 10+021.82 × 10−033.73 × 10+01
Max1.05 × 10−042.00 × 10−032.07 × 10+034.45 × 10−037.12 × 10−035.70 × 10+027.85 × 10−031.55 × 10+02
Min1.16 × 10−063.16 × 10−049.92 × 10+021.29 × 10−037.26 × 10−054.24 × 10+013.40 × 10−048.02 × 10+00
F8Ave−1.02 × 10+05−2.76 × 10+04−2.14 × 10+04−1.59 × 10+04−3.77 × 10+04−2.34 × 10+04−1.12 × 10+04−7.30 × 10+03
Std1.36 × 10+049.19 × 10+023.47 × 10+032.36 × 10+034.85 × 10+031.94 × 10+031.49 × 10+035.79 × 10+02
Max−3.73 × 10+04−2.56 × 10+04−7.61 × 10+03−5.92 × 10+03−2.87 × 10+04−2.04 × 10+04−8.71 × 10+03−6.30 × 10+03
Min−1.09 × 10+05−2.99 × 10+04−2.65 × 10+04−2.01 × 10+04−4.19 × 10+04−2.75 × 10+04−1.48 × 10+04−8.72 × 10+03
F9Ave0.00 × 10+000.00 × 10+001.10 × 10+033.57 × 10−017.58 × 10−157.61 × 10+022.10 × 10−122.45 × 10+02
Std0.00 × 10+000.00 × 10+001.06 × 10+021.00 × 10+004.08 × 10−147.43 × 10+018.14 × 10−129.94 × 10+01
Max0.00 × 10+000.00 × 10+001.27 × 10+033.92 × 10+002.27 × 10−139.24 × 10+024.48 × 10−114.81 × 10+02
Min0.00 × 10+000.00 × 10+008.67 × 10+020.00 × 10+000.00 × 10+005.95 × 10+020.00 × 10+005.70 × 10+01
F10Ave8.88 × 10−164.44 × 10−155.45 × 10+001.12 × 10−133.73 × 10−151.98 × 10+012.00 × 10+011.93 × 10+01
Std0.00 × 10+000.00 × 10+002.94 × 10−011.09 × 10−142.66 × 10−152.26 × 10−012.26 × 10−043.81 × 10+00
Max8.88 × 10−164.44 × 10−155.93 × 10+001.47 × 10−137.99 × 10−152.00 × 10+012.00 × 10+012.07 × 10+01
Min8.88 × 10−164.44 × 10−154.71 × 10+009.33 × 10−148.88 × 10−161.92 × 10+012.00 × 10+015.48 × 10+00
F11Ave0.00 × 10+000.00 × 10+007.96 × 10−013.16 × 10−033.78 × 10−032.98 × 10+025.27 × 10−034.75 × 10+01
Std0.00 × 10+000.00 × 10+007.24 × 10−026.69 × 10−032.03 × 10−021.28 × 10+021.77 × 10−023.63 × 10+01
Max0.00 × 10+000.00 × 10+009.26 × 10−012.59 × 10−021.13 × 10−015.81 × 10+028.51 × 10−021.27 × 10+02
Min0.00 × 10+000.00 × 10+006.41 × 10−010.00 × 10+000.00 × 10+006.68 × 10+010.00 × 10+003.69 × 10+00
F12Ave3.17 × 10−099.65 × 10−035.95 × 10+002.45 × 10−011.50 × 10−025.35 × 10+077.53 × 10−011.63 × 10+08
Std1.26 × 10−084.30 × 10−033.16 × 10+006.61 × 10−025.90 × 10−036.46 × 10+076.75 × 10−029.46 × 10+07
Max7.02 × 10−081.98 × 10−021.53 × 10+014.50 × 10−012.84 × 10−022.65 × 10+088.78 × 10−013.77 × 10+08
Min2.48 × 10−153.03 × 10−032.84 × 10+001.61 × 10−018.09 × 10−037.58 × 10+056.21 × 10−011.73 × 10+07
F13Ave1.40 × 10−077.00 × 10+001.02 × 10+026.04 × 10+001.75 × 10+002.97 × 10+088.98 × 10+003.38 × 10+08
Std3.46 × 10−071.76 × 10+003.79 × 10+014.62 × 10−016.43 × 10−012.67 × 10+082.88 × 10−012.02 × 10+08
Max1.44 × 10−068.86 × 10+001.99 × 10+026.84 × 10+003.40 × 10+009.02 × 10+089.63 × 10+008.89 × 10+08
Min8.81 × 10−142.32 × 10+005.92 × 10+015.25 × 10+007.38 × 10−011.48 × 10+078.36 × 10+008.08 × 10+07
Friedman Average Rank | 1.2192 | 2.5654 | 6.0026 | 3.9192 | 3.2449 | 7.0949 | 4.9436 | 7.0103
Rank | 1 | 2 | 6 | 4 | 3 | 8 | 5 | 7
Table A5. Test results for each high-dimensional function in dimension 200.
Function | Criteria | MSMPA | MPA | PSO | GWO | WOA | MFO | SOA | SCA
F1Ave0.00 × 10+003.65 × 10−416.66 × 10+024.56 × 10−202.02 × 10−1451.85 × 10+057.43 × 10−123.09 × 10+04
Std0.00 × 10+005.44 × 10−417.06 × 10+013.74 × 10−201.09 × 10−1442.68 × 10+049.09 × 10−121.20 × 10+04
Max0.00 × 10+002.13 × 10−407.89 × 10+021.78 × 10−196.05 × 10−1442.51 × 10+053.52 × 10−114.97 × 10+04
Min0.00 × 10+005.40 × 10−435.44 × 10+026.38 × 10−213.67 × 10−1671.28 × 10+051.61 × 10−138.45 × 10+03
F2Ave0.00 × 10+006.55 × 10−241.66 × 10+211.41 × 10−121.40 × 10−1015.62 × 10+026.40 × 10−091.61 × 10+01
Std0.00 × 10+008.57 × 10−248.94 × 10+214.75 × 10−137.52 × 10−1015.37 × 10+015.07 × 10−091.46 × 10+01
Max0.00 × 10+003.26 × 10−234.98 × 10+222.44 × 10−124.19 × 10−1006.90 × 10+021.88 × 10−087.48 × 10+01
Min0.00 × 10+008.37 × 10−275.42 × 10+025.81 × 10−139.51 × 10−1164.59 × 10+027.53 × 10−107.98 × 10−01
F3Ave0.00 × 10+001.82 × 10+018.16 × 10+043.22 × 10+034.56 × 10+067.31 × 10+051.74 × 10+028.38 × 10+05
Std0.00 × 10+004.51 × 10+012.07 × 10+042.46 × 10+031.42 × 10+061.37 × 10+053.79 × 10+021.51 × 10+05
Max0.00 × 10+002.50 × 10+021.62 × 10+051.10 × 10+047.63 × 10+061.02 × 10+061.77 × 10+031.17 × 10+06
Min0.00 × 10+001.83 × 10−055.26 × 10+041.67 × 10+021.70 × 10+064.24 × 10+058.97 × 10−035.52 × 10+05
F4Ave0.00 × 10+001.01 × 10−141.95 × 10+019.86 × 10+007.72 × 10+019.70 × 10+019.20 × 10+019.51 × 10+01
Std0.00 × 10+004.83 × 10−151.53 × 10+005.31 × 10+002.46 × 10+017.95 × 10−013.87 × 10+001.34 × 10+00
Max0.00 × 10+002.33 × 10−142.41 × 10+012.33 × 10+019.83 × 10+019.87 × 10+019.81 × 10+019.69 × 10+01
Min0.00 × 10+003.70 × 10−151.70 × 10+011.70 × 10+002.14 × 10+019.53 × 10+017.82 × 10+019.18 × 10+01
F5Ave1.02 × 10−041.96 × 10+021.66 × 10+061.98 × 10+021.97 × 10+026.50 × 10+081.99 × 10+023.48 × 10+08
Std2.67 × 10−049.30 × 10−012.85 × 10+056.41 × 10−012.29 × 10−011.09 × 10+081.44 × 10−011.06 × 10+08
Max1.42 × 10−031.97 × 10+022.15 × 10+061.98 × 10+021.98 × 10+029.02 × 10+081.99 × 10+026.70 × 10+08
Min1.70 × 10−131.94 × 10+021.14 × 10+061.96 × 10+021.97 × 10+024.56 × 10+081.98 × 10+021.85 × 10+08
F6Ave1.26 × 10−058.10 × 10+006.94 × 10+022.79 × 10+015.90 × 10+001.76 × 10+054.22 × 10+013.33 × 10+04
Std2.31 × 10−051.04 × 10+009.50 × 10+011.05 × 10+001.60 × 10+002.03 × 10+048.24 × 10−011.29 × 10+04
Max1.13 × 10−041.13 × 10+019.09 × 10+023.03 × 10+019.21 × 10+002.08 × 10+054.44 × 10+016.01 × 10+04
Min3.84 × 10−086.64 × 10+004.85 × 10+022.56 × 10+012.94 × 10+001.40 × 10+054.08 × 10+015.36 × 10+03
F7Ave2.76 × 10−058.44 × 10−047.55 × 10+034.41 × 10−031.83 × 10−031.85 × 10+034.48 × 10−039.36 × 10+02
Std2.47 × 10−052.82 × 10−049.21 × 10+021.57 × 10−032.02 × 10−034.40 × 10+022.51 × 10−033.30 × 10+02
Max8.58 × 10−051.42 × 10−039.37 × 10+037.72 × 10−035.60 × 10−032.87 × 10+031.09 × 10−021.77 × 10+03
Min3.62 × 10−073.35 × 10−045.62 × 10+031.58 × 10−033.43 × 10−051.02 × 10+037.65 × 10−043.59 × 10+02
F8Ave−2.09 × 10+05−4.89 × 10+04−3.92 × 10+04−2.95 × 10+04−7.18 × 10+04−3.97 × 10+04−1.55 × 10+04−1.04 × 10+04
Std2.57 × 10+041.67 × 10+038.71 × 10+032.16 × 10+031.08 × 10+043.44 × 10+032.77 × 10+037.08 × 10+02
Max−8.16 × 10+04−4.50 × 10+04−8.93 × 10+03−2.52 × 10+04−5.03 × 10+04−3.38 × 10+04−1.21 × 10+04−8.57 × 10+03
Min−2.18 × 10+05−5.28 × 10+04−5.31 × 10+04−3.35 × 10+04−8.38 × 10+04−4.85 × 10+04−2.32 × 10+04−1.19 × 10+04
F9Ave0.00 × 10+000.00 × 10+002.71 × 10+031.48 × 10+001.52 × 10−141.96 × 10+032.06 × 10+004.89 × 10+02
Std0.00 × 10+000.00 × 10+001.72 × 10+023.30 × 10+008.16 × 10−149.12 × 10+015.77 × 10+002.06 × 10+02
Max0.00 × 10+000.00 × 10+003.03 × 10+031.47 × 10+014.55 × 10−132.13 × 10+032.69 × 10+019.71 × 10+02
Min0.00 × 10+000.00 × 10+002.40 × 10+032.27 × 10−130.00 × 10+001.74 × 10+034.55 × 10−133.80 × 10+01
F10Ave8.88 × 10−164.44 × 10−158.26 × 10+001.35 × 10−114.09 × 10−152.00 × 10+012.00 × 10+011.85 × 10+01
Std0.00 × 10+000.00 × 10+003.02 × 10−013.57 × 10−122.12 × 10−151.56 × 10−021.73 × 10−044.14 × 10+00
Max8.88 × 10−164.44 × 10−159.01 × 10+002.03 × 10−117.99 × 10−152.00 × 10+012.00 × 10+012.07 × 10+01
Min8.88 × 10−164.44 × 10−157.69 × 10+006.78 × 10−128.88 × 10−161.99 × 10+012.00 × 10+018.37 × 10+00
F11Ave0.00 × 10+000.00 × 10+001.19 × 10+004.17 × 10−040.00 × 10+001.67 × 10+035.75 × 10−032.94 × 10+02
Std0.00 × 10+000.00 × 10+002.46 × 10−022.24 × 10−030.00 × 10+002.06 × 10+021.54 × 10−021.67 × 10+02
Max0.00 × 10+000.00 × 10+001.26 × 10+001.25 × 10−020.00 × 10+002.03 × 10+036.44 × 10−028.22 × 10+02
Min0.00 × 10+000.00 × 10+001.15 × 10+001.11 × 10−160.00 × 10+001.35 × 10+038.15 × 10−146.68 × 10+01
F12Ave4.26 × 10−104.80 × 10−021.35 × 10+024.84 × 10−012.44 × 10−021.28 × 10+098.91 × 10−019.91 × 10+08
Std1.71 × 10−096.59 × 10−031.35 × 10+025.06 × 10−029.13 × 10−033.11 × 10+083.28 × 10−023.66 × 10+08
Max9.41 × 10−096.18 × 10−026.02 × 10+025.71 × 10−015.21 × 10−022.14 × 10+099.56 × 10−011.82 × 10+09
Min1.31 × 10−193.57 × 10−023.67 × 10+013.88 × 10−019.73 × 10−038.47 × 10+088.10 × 10−013.88 × 10+08
F13Ave1.35 × 10−071.80 × 10+012.17 × 10+041.61 × 10+014.29 × 10+002.48 × 10+091.90 × 10+011.56 × 10+09
Std4.51 × 10−074.03 × 10−019.83 × 10+034.78 × 10−011.33 × 10+005.53 × 10+082.17 × 10−015.53 × 10+08
Max2.39 × 10−061.87 × 10+014.99 × 10+041.70 × 10+017.36 × 10+003.76 × 10+091.97 × 10+012.81 × 10+09
Min2.74 × 10−201.71 × 10+018.75 × 10+031.51 × 10+011.48 × 10+001.47 × 10+091.85 × 10+016.37 × 10+08
Friedman Average Rank | 1.1615 | 2.6487 | 6.0256 | 4.0372 | 2.9410 | 7.2051 | 5.1038 | 6.8769
Rank | 1 | 2 | 6 | 4 | 3 | 8 | 5 | 7
Table A6. Test results for each high-dimensional function with dimension 500.
Function | Criteria | MSMPA | MPA | PSO | GWO | WOA | MFO | SOA | SCA
F1Ave0.00 × 10+004.83 × 10−397.34 × 10+031.47 × 10−122.28 × 10−1459.71 × 10+055.54 × 10−091.45 × 10+05
Std0.00 × 10+005.89 × 10−394.01 × 10+027.71 × 10−131.22 × 10−1443.73 × 10+045.53 × 10−094.32 × 10+04
Max0.00 × 10+002.68 × 10−387.94 × 10+033.35 × 10−126.82 × 10−1441.04 × 10+061.81 × 10−082.37 × 10+05
Min0.00 × 10+009.09 × 10−416.32 × 10+034.23 × 10−133.36 × 10−1649.02 × 10+057.71 × 10−114.46 × 10+04
F2Ave4.63 × 10−363.83 × 10−122.47 × 10+1345.89 × 10−084.95 × 10−992.24 × 10+032.05 × 10−077.28 × 10+01
Std1.57 × 10−352.06 × 10−111.33 × 10+1351.37 × 10−082.66 × 10−989.70 × 10+011.80 × 10−074.68 × 10+01
Max7.61 × 10−351.15 × 10−107.42 × 10+1359.51 × 10−081.48 × 10−972.37 × 10+038.30 × 10−072.15 × 10+02
Min6.78 × 10−667.25 × 10−254.29 × 10+283.55 × 10−087.77 × 10−1132.00 × 10+033.58 × 10−081.47 × 10+01
F3Ave0.00 × 10+001.63 × 10+035.50 × 10+051.31 × 10+052.87 × 10+073.97 × 10+063.07 × 10+045.83 × 10+06
Std0.00 × 10+002.12 × 10+031.15 × 10+054.40 × 10+049.45 × 10+066.08 × 10+055.00 × 10+041.09 × 10+06
Max0.00 × 10+007.46 × 10+039.02 × 10+052.27 × 10+055.00 × 10+075.65 × 10+062.34 × 10+058.20 × 10+06
Min0.00 × 10+001.85 × 10+013.77 × 10+055.61 × 10+041.40 × 10+073.03 × 10+069.77 × 10+002.78 × 10+06
F4Ave0.00 × 10+005.34 × 10−132.79 × 10+015.74 × 10+018.08 × 10+019.89 × 10+019.80 × 10+019.88 × 10+01
Std0.00 × 10+004.83 × 10−131.04 × 10+005.82 × 10+001.57 × 10+013.50 × 10−017.60 × 10−013.71 × 10−01
Max0.00 × 10+002.35 × 10−122.99 × 10+016.81 × 10+019.84 × 10+019.95 × 10+019.93 × 10+019.93 × 10+01
Min0.00 × 10+006.38 × 10−142.59 × 10+014.41 × 10+013.26 × 10+019.81 × 10+019.53 × 10+019.79 × 10+01
F5Ave2.22 × 10−044.96 × 10+025.16 × 10+074.98 × 10+024.96 × 10+024.01 × 10+094.99 × 10+021.48 × 10+09
Std8.38 × 10−044.67 × 10−015.90 × 10+062.36 × 10−013.08 × 10−012.39 × 10+085.87 × 10−022.87 × 10+08
Max4.62 × 10−034.96 × 10+026.31 × 10+074.98 × 10+024.97 × 10+024.32 × 10+094.99 × 10+022.00 × 10+09
Min1.16 × 10−214.94 × 10+023.78 × 10+074.97 × 10+024.95 × 10+023.48 × 10+094.98 × 10+029.03 × 10+08
F6Ave1.77 × 10−055.21 × 10+017.37 × 10+039.29 × 10+012.04 × 10+019.54 × 10+051.16 × 10+021.70 × 10+05
Std3.55 × 10−051.74 × 10+004.36 × 10+021.72 × 10+007.52 × 10+003.83 × 10+048.53 × 10−016.92 × 10+04
Max1.93 × 10−045.63 × 10+018.27 × 10+039.58 × 10+014.01 × 10+011.07 × 10+061.17 × 10+023.91 × 10+05
Min1.32 × 10−104.92 × 10+016.30 × 10+038.98 × 10+011.01 × 10+018.82 × 10+051.14 × 10+025.59 × 10+04
F7Ave3.04 × 10−051.30 × 10−035.68 × 10+041.25 × 10−022.16 × 10−033.08 × 10+049.87 × 10−031.13 × 10+04
Std2.37 × 10−057.40 × 10−042.14 × 10+034.40 × 10−032.16 × 10−031.95 × 10+036.62 × 10−032.83 × 10+03
Max1.06 × 10−042.77 × 10−036.12 × 10+042.35 × 10−021.17 × 10−023.40 × 10+043.47 × 10−021.65 × 10+04
Min2.04 × 10−061.28 × 10−045.19 × 10+046.37 × 10−032.64 × 10−052.72 × 10+042.55 × 10−035.74 × 10+03
F8Ave−5.35 × 10+05−9.78 × 10+04−9.25 × 10+04−5.96 × 10+04−1.92 × 10+05−7.43 × 10+04−2.65 × 10+04−1.59 × 10+04
Std1.93 × 10+042.46 × 10+032.53 × 10+049.63 × 10+032.29 × 10+046.21 × 10+035.83 × 10+031.01 × 10+03
Max−4.49 × 10+05−9.20 × 10+04−1.36 × 10+04−1.34 × 10+04−1.45 × 10+05−6.32 × 10+04−1.89 × 10+04−1.41 × 10+04
Min−5.45 × 10+05−1.05 × 10+05−1.15 × 10+05−7.17 × 10+04−2.09 × 10+05−8.84 × 10+04−4.58 × 10+04−1.90 × 10+04
F9Ave0.00 × 10+000.00 × 10+007.90 × 10+035.58 × 10+000.00 × 10+006.47 × 10+034.53 × 10−011.24 × 10+03
Std0.00 × 10+000.00 × 10+002.80 × 10+026.52 × 10+000.00 × 10+001.62 × 10+022.03 × 10+005.14 × 10+02
Max0.00 × 10+000.00 × 10+008.34 × 10+032.38 × 10+010.00 × 10+006.85 × 10+031.11 × 10+012.37 × 10+03
Min0.00 × 10+000.00 × 10+006.99 × 10+037.46 × 10−110.00 × 10+006.21 × 10+032.73 × 10−123.61 × 10+02
F10Ave8.88 × 10−164.44 × 10−151.29 × 10+015.84 × 10−084.09 × 10−152.01 × 10+012.00 × 10+011.89 × 10+01
Std0.00 × 10+000.00 × 10+001.93 × 10−011.60 × 10−082.49 × 10−151.35 × 10−016.35 × 10−053.77 × 10+00
Max8.88 × 10−164.44 × 10−151.32 × 10+019.69 × 10−087.99 × 10−152.04 × 10+012.00 × 10+012.08 × 10+01
Min8.88 × 10−164.44 × 10−151.26 × 10+013.53 × 10−088.88 × 10−162.00 × 10+012.00 × 10+018.93 × 10+00
F11Ave0.00 × 10+000.00 × 10+00 3.47 × 10+001.22 × 10−030.00 × 10+008.69 × 10+031.69 × 10−031.55 × 10+03
Std0.00 × 10+000.00 × 10+00 1.24 × 10−014.84 × 10−030.00 × 10+004.05 × 10+026.37 × 10−035.95 × 10+02
Max0.00 × 10+000.00 × 10+00 3.75 × 10+002.45 × 10−020.00 × 10+009.32 × 10+032.87 × 10−023.26 × 10+03
Min0.00 × 10+000.00 × 10+00 3.27 × 10+007.19 × 10−140.00 × 10+007.66 × 10+033.62 × 10−125.10 × 10+02
F12Ave2.97 × 10−112.01 × 10−019.21 × 10+057.52 × 10−014.25 × 10−029.39 × 10+091.02 × 10+003.98 × 10+09
Std1.57 × 10−101.83 × 10−023.10 × 10+053.43 × 10−021.51 × 10−028.34 × 10+082.24 × 10−021.01 × 10+09
Max8.76 × 10−102.42 × 10−011.57 × 10+068.25 × 10−017.47 × 10−021.09 × 10+101.07 × 10+005.90 × 10+09
Min5.25 × 10−221.69 × 10−014.69 × 10+056.86 × 10−011.85 × 10−027.38 × 10+099.78 × 10−011.75 × 10+09
F13Ave3.83 × 10−134.75 × 10+018.79 × 10+064.60 × 10+011.06 × 10+011.76 × 10+104.95 × 10+016.93 × 10+09
Std1.53 × 10−125.31 × 10−011.67 × 10+065.50 × 10−013.28 × 10+001.39 × 10+095.33 × 10−011.57 × 10+09
Max8.49 × 10−124.84 × 10+011.17 × 10+074.69 × 10+011.82 × 10+012.00 × 10+105.05 × 10+019.33 × 10+09
Min1.55 × 10−224.64 × 10+015.43 × 10+064.51 × 10+015.31 × 10+001.41 × 10+104.87 × 10+013.15 × 10+09
Friedman Average Rank | 1.2251 | 2.6936 | 5.9231 | 4.2103 | 2.7205 | 7.2692 | 5.0333 | 6.8949
Rank | 1 | 2 | 6 | 4 | 3 | 8 | 5 | 7

References

1. Chakraborty, A.; Kar, A.K. Swarm intelligence: A review of algorithms. In Modeling and Optimization in Science and Technologies; Springer: Berlin/Heidelberg, Germany, 2017.
2. Wei, C.-L.; Wang, G.-G. Hybrid Annealing Krill Herd and Quantum-Behaved Particle Swarm Optimization. Mathematics 2020, 8, 1403.
3. Blum, C.; Li, X. Swarm Intelligence in Optimization. In Swarm Intelligence; Springer: Berlin/Heidelberg, Germany, 2008.
4. Fister, I.; Yang, X.S.; Brest, J.; Fister, D. A Brief Review of Nature-Inspired Algorithms for Optimization. arXiv 2013, arXiv:1307.4186.
5. Brezočnik, L.; Fister, I.; Podgorelec, V. Swarm Intelligence Algorithms for Feature Selection: A Review. Appl. Sci. 2018, 8, 1521.
6. Omran, M.G.H. Particle Swarm Optimization Methods for Pattern Recognition and Image Processing. Ph.D. Thesis, University of Pretoria, Pretoria, South Africa, 2004.
7. Martens, D.; Baesens, B.; Fawcett, T. Editorial survey: Swarm intelligence for data mining. Mach. Learn. 2011, 82, 1–42.
8. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995.
9. Alejo-Reyes, A.; Cuevas, E.; Rodríguez, A.; Mendoza, A.; Olivares-Benitez, E. An Improved Grey Wolf Optimizer for a Supplier Selection and Order Quantity Allocation Problem. Mathematics 2020, 8, 1457.
10. Mirjalili, S. Moth-Flame Optimization Algorithm: A Novel Nature-Inspired Heuristic Paradigm. Knowl.-Based Syst. 2015, 89.
11. Dhiman, G.; Kumar, V. Seagull Optimization Algorithm: Theory and Its Applications for Large-Scale Industrial Engineering Problems. Knowl.-Based Syst. 2019, 165.
12. Mirjalili, S. SCA: A Sine Cosine Algorithm for Solving Optimization Problems. Knowl.-Based Syst. 2016, 96.
13. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95.
14. Pierezan, J.; Dos Santos Coelho, L. Coyote Optimization Algorithm: A New Metaheuristic for Global Optimization Problems. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation, Rio de Janeiro, Brazil, 8–13 July 2018.
15. Meng, O.K.; Pauline, O.; Kiong, S.C. A Carnivorous Plant Algorithm for Solving Global Optimization Problems. Appl. Soft Comput. 2021, 98.
16. Qais, M.H.; Hasanien, H.M.; Alghuwainem, S. Transient Search Optimization: A New Meta-Heuristic Optimization Algorithm. Appl. Intell. 2020, 50.
17. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A Nature-Inspired Metaheuristic. Expert Syst. Appl. 2020, 152, 113377.
18. Elaziz, M.A.; Ewees, A.A.; Yousri, D.; Alwerfali, H.S.N.; Awad, Q.A.; Lu, S.; Al-Qaness, M.A.A. An Improved Marine Predators Algorithm with Fuzzy Entropy for Multi-Level Thresholding: Real World Example of COVID-19 CT Image Segmentation. IEEE Access 2020, 8, 125306–125330.
19. Abdel-Basset, M.; Mohamed, R.; Elhoseny, M.; Chakrabortty, R.K.; Ryan, M. A Hybrid COVID-19 Detection Model Using an Improved Marine Predators Algorithm and a Ranking-Based Diversity Reduction Strategy. IEEE Access 2020, 8, 79521–79540.
20. Naga, J.; Naraharisetti, L.; Devarapalli, R. Environmental Effects Parameter Extraction of Solar Photovoltaic Module by Using a Novel Hybrid Marine Predators-Success History Based Adaptive Differential Evolution Algorithm. Energy Sources Part A Recovery Util. Environ. Eff. 2020, 1, 1–23.
21. Ridha, H.M. Parameters Extraction of Single and Double Diodes Photovoltaic Models Using Marine Predators Algorithm and Lambert W Function. Sol. Energy 2020, 209, 674–693.
22. Yousri, D.; Babu, T.S.; Beshr, E.; Eteiba, M.B.; Allam, D. A Robust Strategy Based on Marine Predators Algorithm for Large Scale Photovoltaic Array Reconfiguration to Mitigate the Partial Shading Effect on the Performance of PV System. IEEE Access 2020, 4.
23. Soliman, M.A.; Hasanien, H.M.; Alkuhayli, A. Marine Predators Algorithm for Parameters Identification of Triple-Diode Photovoltaic Models. IEEE Access 2020, 8, 155832–155842.
24. Ebeed, M.; Alhejji, A.; Kamel, S.; Jurado, F. Solving the Optimal Reactive Power Dispatch Using Marine Predators Algorithm Considering the Uncertainties in Load and Wind-Solar Generation Systems. Energies 2020, 13, 4316.
25. Sahlol, A.T.; Yousri, D.; Ewees, A.A.; Al-qaness, M.A.A.; Damasevicius, R.; Elaziz, M.A. COVID-19 Image Classification Using Deep Features and Fractional-Order Marine Predators Algorithm. Sci. Rep. 2020, 10, 1–15.
26. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
27. Çevik, A.; Kurtoğlu, A.E.; Bilgehan, M.; Gülşan, M.E.; Albegmprli, H.M. Support Vector Machines in Structural Engineering: A Review. J. Civ. Eng. Manag. 2015, 21, 261–281.
28. Cambria, E.; Huang, G.-B.; Kasun, L.L.C.; Zhou, H.; Vong, C.M.; Lin, J.; Yin, J.; Cai, Z.; Liu, Q.; Li, K.; et al. Extreme Learning Machines [Trends & Controversies]. IEEE Intell. Syst. 2013, 28, 30–59.
29. Deng, W.; Guo, Y.; Liu, J.; Li, Y.; Liu, D.; Zhu, L. A Missing Power Data Filling Method Based on Improved Random Forest Algorithm. Chin. J. Electr. Eng. 2019, 5, 33–39.
30. Paul, A.; Mukherjee, D.P.; Das, P.; Gangopadhyay, A.; Chintha, A.R.; Kundu, S. Improved Random Forest for Classification. IEEE Trans. Image Process. 2018, 27, 4012–4024.
31. Kalaiselvi, B.; Thangamani, M. An Efficient Pearson Correlation Based Improved Random Forest Classification for Protein Structure Prediction Techniques. Measurement 2020, 162, 107885.
32. Peng, S.; Hu, Q.; Chen, Y.; Dang, J. Improved Support Vector Machine Algorithm for Heterogeneous Data. Pattern Recognit. 2015, 48, 2072–2083.
33. Dong, H.; Yang, L.; Wang, X. Robust Semi-Supervised Support Vector Machines with Laplace Kernel-Induced Correntropy Loss Functions. Appl. Intell. 2021, 51, 819–833.
34. Huang, G.; Song, S.; Gupta, J.N.D.; Wu, C. Semi-Supervised and Unsupervised Extreme Learning Machines. IEEE Trans. Cybern. 2014, 44.
35. She, Q.; Zou, J.; Meng, M.; Fan, Y.; Luo, Z. Balanced Graph-Based Regularized Semi-Supervised Extreme Learning Machine for EEG Classification. Int. J. Mach. Learn. Cybern. 2020, 1.
36. Zhou, W.; Qiao, S.; Yi, Y.; Han, N.; Chen, Y.; Lei, G. Automatic Optic Disc Detection Using Low-Rank Representation Based Semi-Supervised Extreme Learning Machine. Int. J. Mach. Learn. Cybern. 2020, 11, 55–69.
37. She, Q.; Hu, B.; Luo, Z.; Nguyen, T.; Zhang, Y. A Hierarchical Semi-Supervised Extreme Learning Machine Method for EEG Recognition. Med. Biol. Eng. Comput. 2019, 57, 147–157.
38. Ma, J.; Yuan, C. Adaptive Safe Semi-Supervised Extreme Machine Learning. IEEE Access 2019, 7, 76176–76184.
39. Pei, H.; Wang, K.; Lin, Q.; Zhong, P. Robust Semi-Supervised Extreme Learning Machine. Knowl.-Based Syst. 2018, 159, 203–220.
40. Tuan, N.H.; Trong, D.D.; Quan, P.H. A Note on a Cauchy Problem for the Laplace Equation: Regularization and Error Estimates. Appl. Math. Comput. 2010, 217, 2913–2922.
41. Tao, D.; Jin, L.; Liu, W.; Li, X. Hessian Regularized Support Vector Machines for Mobile Image Annotation on the Cloud. IEEE Trans. Multimed. 2013, 15.
42. Mörters, P.; Peres, Y.; Schramm, O.; Werner, W. Brownian Motion; Cambridge University Press: Cambridge, UK, 2010.
43. Reynolds, A.M.; Rhodes, C.J. The Lévy Flight Paradigm: Random Search Patterns and Mechanisms. Ecology 2009, 90.
44. Sun, Y.; Gao, Y.; Shi, X. Chaotic Multi-Objective Particle Swarm Optimization Algorithm Incorporating Clone Immunity. Mathematics 2019, 7, 146.
45. Yang, X.S.; Deb, S. Cuckoo Search via Lévy Flights. In Proceedings of the 2009 World Congress on Nature and Biologically Inspired Computing, Coimbatore, India, 9–11 December 2009.
46. Li, C.; Luo, G.; Qin, K.; Li, C. An Image Encryption Scheme Based on Chaotic Tent Map. Nonlinear Dyn. 2017, 87.
47. Park, T.S.; Lee, J.H.; Choi, B. Optimization for Artificial Neural Network with Adaptive Inertial Weight of Particle Swarm Optimization. In Proceedings of the 2009 8th IEEE International Conference on Cognitive Informatics, Kowloon, Hong Kong, 15–17 June 2009.
48. Li, M.; Chen, H.; Wang, X.; Zhong, N.; Lu, S. An Improved Particle Swarm Optimization Algorithm with Adaptive Inertia Weights. Int. J. Inf. Technol. Decis. Mak. 2019, 18.
49. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An Improved Grey Wolf Optimizer for Solving Engineering Problems. Expert Syst. Appl. 2021, 166.
50. Jamil, M.; Yang, X.S. A Literature Survey of Benchmark Functions for Global Optimisation Problems. Int. J. Math. Model. Numer. Optim. 2013, 4.
51. Kommadath, R.; Kotecha, P. Teaching Learning Based Optimization with Focused Learning and Its Performance on CEC2017 Functions. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation, San Sebastian, Spain, 5–8 June 2017.
52. Chen, H.; Yang, C.; Heidari, A.A.; Zhao, X. An Efficient Double Adaptive Random Spare Reinforced Whale Optimization Algorithm. Expert Syst. Appl. 2020, 154.
53. Derrac, J.; García, S.; Molina, D.; Herrera, F. A Practical Tutorial on the Use of Nonparametric Statistical Tests as a Methodology for Comparing Evolutionary and Swarm Intelligence Algorithms. Swarm Evol. Comput. 2011, 1.
54. He, Z.; Xia, K.; Niu, W.; Aslam, N.; Hou, J. Semisupervised SVM Based on Cuckoo Search Algorithm and Its Application. Math. Probl. Eng. 2018, 2018.
55. Bai, J.; Xia, K.; Lin, Y.; Wu, P. Attribute Reduction Based on Consistent Covering Rough Set and Its Application. Complexity 2017, 2017.
56. Gómez-Chova, L.; Camps-Valls, G.; Muñoz-Mari, J.; Calpe, J. Semisupervised Image Classification with Laplacian Support Vector Machines. IEEE Geosci. Remote Sens. Lett. 2008, 5.
Figure 1. The plot of Adaptive Weights.
Figure 2. The plot of the adaptive step control factor.
Figure 3. Block diagram of the MSMPA-JRSSELM oil layer identification system.
Figure 4. Normalized curves for each conditional property: (a,b) belong to well 1; (c,d) belong to well 2.
Figure 5. Original and predicted oil layer distribution for two wells, where (a-c) is the distribution for well 1, and (d-f) is the distribution for well 2.
Table 1. Parameter settings for each algorithm.
Algorithm | Parameter Settings
PSO | $c_1 = 2$, $c_2 = 2$
GWO | $a = 2(1 - t/T_{\max})$
WOA | $a = 2(1 - t/T_{\max})$
MFO | convergence constant $\in [-1, -2]$, logarithmic spiral: 0.75
SOA | $f_c = 2$, control parameter $A \in [2, 0]$
SCA | $r_1 = a - t \cdot a/T$
MPA | $FADs = 0.2$, $P = 0.5$
MSMPA | $w = a\cos^{b}\left(\ln\left(1 + e^{\,\mathrm{Iter}/\mathrm{Max\_Iter}}\right)\right) + c$, $P = m\sin\left[\frac{\pi}{5} \times \left(p\,e^{\,n \times \mathrm{Iter}/\mathrm{Max\_Iter}}\right)\right] + q$, $FADs = 0.2$
Table 2. Description of the 18 classical benchmarking functions.
Type | Function | Dim | Range | Optimum Value
Unimodal | $F_1(x) = \sum_{i=1}^{n} x_i^2$ | 50 | [−100, 100] | 0
Unimodal | $F_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | 50 | [−10, 10] | 0
Unimodal | $F_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | 50 | [−100, 100] | 0
Unimodal | $F_4(x) = \max_i \{ |x_i|,\ 1 \le i \le n \}$ | 50 | [−100, 100] | 0
Unimodal | $F_5(x) = \sum_{i=1}^{n-1} \left[ 100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | 50 | [−30, 30] | 0
Unimodal | $F_6(x) = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2$ | 50 | [−100, 100] | 0
Unimodal | $F_7(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$ | 50 | [−1.28, 1.28] | 0
Multimodal | $F_8(x) = \sum_{i=1}^{n} -x_i \sin\left(\sqrt{|x_i|}\right)$ | 50 | [−500, 500] | −418.9829 × d
Multimodal | $F_9(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right]$ | 50 | [−5.12, 5.12] | 0
Multimodal | $F_{10}(x) = -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right) + 20 + e$ | 50 | [−32, 32] | 0
Multimodal | $F_{11}(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | 50 | [−600, 600] | 0
Multimodal | $F_{12}(x) = \frac{\pi}{n}\left\{ 10\sin(\pi y_1) + \sum_{i=1}^{n-1}(y_i - 1)^2\left[1 + 10\sin^2(\pi y_{i+1})\right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, where $y_i = 1 + \frac{x_i + 1}{4}$ and $u(x_i, a, k, m) = k(x_i - a)^m$ if $x_i > a$; $0$ if $-a < x_i < a$; $k(-x_i - a)^m$ if $x_i < -a$ | 50 | [−50, 50] | 0
Multimodal | $F_{13}(x) = 0.1\left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n}(x_i - 1)^2\left[1 + \sin^2(3\pi x_{i+1})\right] + (x_n - 1)^2\left[1 + \sin^2(2\pi x_n)\right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ | 50 | [−50, 50] | 0
Fixed Dimension | $F_{14}(x) = \left( \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2}(x_i - a_{ij})^6} \right)^{-1}$ | 2 | [−65, 65] | 1
Fixed Dimension | $F_{15}(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1(b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$ | 4 | [−5, 5] | 0.00030
Fixed Dimension | $F_{16}(x) = -\sum_{i=1}^{5} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.1532
Fixed Dimension | $F_{17}(x) = -\sum_{i=1}^{7} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.4028
Fixed Dimension | $F_{18}(x) = -\sum_{i=1}^{10} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.5363
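As one concrete instance of these benchmarks, the short sketch below implements F10 (the Ackley function) in the 50-dimensional setting used here; at the origin it returns a value at the level of floating-point round-off.

```python
import numpy as np

def ackley(x):
    """F10 (Ackley function); global minimum 0 at the origin."""
    n = len(x)
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

print(ackley(np.zeros(50)))  # ~0 up to floating-point error (about 4.44e-16)
```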
Table 3. CEC2017 numerical optimization functions.
Type | No. | Function Name | Range | Optimum Value
Unimodal | CF1 | Shifted and Rotated Bent Cigar Function | [−100, 100] | 100
Unimodal | CF2 | Shifted and Rotated Sum of Different Power Function | [−100, 100] | 200
Unimodal | CF3 | Shifted and Rotated Zakharov Function | [−100, 100] | 300
Multimodal | CF4 | Shifted and Rotated Rosenbrock's Function | [−100, 100] | 400
Multimodal | CF5 | Shifted and Rotated Rastrigin's Function | [−100, 100] | 500
Multimodal | CF6 | Shifted and Rotated Expanded Scaffer's F6 Function | [−100, 100] | 600
Multimodal | CF7 | Shifted and Rotated Lunacek Bi_Rastrigin Function | [−100, 100] | 700
Multimodal | CF8 | Shifted and Rotated Non-Continuous Rastrigin's Function | [−100, 100] | 800
Multimodal | CF9 | Shifted and Rotated Levy Function | [−100, 100] | 900
Multimodal | CF10 | Shifted and Rotated Schwefel's Function | [−100, 100] | 1000
Hybrid | CF11 | Hybrid Function 1 (N = 3) | [−100, 100] | 1100
Hybrid | CF12 | Hybrid Function 2 (N = 3) | [−100, 100] | 1200
Hybrid | CF13 | Hybrid Function 3 (N = 3) | [−100, 100] | 1300
Hybrid | CF14 | Hybrid Function 4 (N = 4) | [−100, 100] | 1400
Hybrid | CF15 | Hybrid Function 5 (N = 4) | [−100, 100] | 1500
Hybrid | CF16 | Hybrid Function 6 (N = 4) | [−100, 100] | 1600
Hybrid | CF17 | Hybrid Function 6 (N = 5) | [−100, 100] | 1700
Hybrid | CF18 | Hybrid Function 6 (N = 5) | [−100, 100] | 1800
Hybrid | CF19 | Hybrid Function 6 (N = 5) | [−100, 100] | 1900
Hybrid | CF20 | Hybrid Function 6 (N = 6) | [−100, 100] | 2000
Composition | CF21 | Composition Function 1 (N = 3) | [−100, 100] | 2100
Composition | CF22 | Composition Function 2 (N = 3) | [−100, 100] | 2200
Composition | CF23 | Composition Function 3 (N = 4) | [−100, 100] | 2300
Composition | CF24 | Composition Function 4 (N = 4) | [−100, 100] | 2400
Composition | CF25 | Composition Function 5 (N = 5) | [−100, 100] | 2500
Composition | CF26 | Composition Function 6 (N = 5) | [−100, 100] | 2600
Composition | CF27 | Composition Function 7 (N = 6) | [−100, 100] | 2700
Composition | CF28 | Composition Function 8 (N = 6) | [−100, 100] | 2800
Composition | CF29 | Composition Function 9 (N = 3) | [−100, 100] | 2900
Composition | CF30 | Composition Function 10 (N = 3) | [−100, 100] | 3000
Table 4. Details of the data set division between the two wells.

| Well | Training Depth (m) | Training Oil Layers | Training Dry Layers | Test Depth (m) | Test Oil Layers | Test Dry Layers |
| --- | --- | --- | --- | --- | --- | --- |
| Well 1 | 3150~3330 | 88 | 247 | 3330~3460 | 115 | 981 |
| Well 2 | 1180~1255 | 45 | 192 | 1230~1300 | 92 | 469 |
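A minimal sketch of the depth-based split in Table 4 for well 1, assuming the logging curves have been loaded into a pandas DataFrame with a DEPTH column (in meters) and a LABEL column in {−1, 1}; the file name and column names are illustrative, not those of the actual data set.

```python
import pandas as pd

# Hypothetical sketch of the well-1 split in Table 4. Assumes the logs
# live in a CSV with a 'DEPTH' column (m) and a 'LABEL' column in
# {-1, 1} (dry layer / oil layer); both names are illustrative.
logs = pd.read_csv("well1_logs.csv")

# Boundary handling at 3330 m is a modeling choice; here the boundary
# sample goes to the test set.
train = logs[(logs.DEPTH >= 3150) & (logs.DEPTH < 3330)]
test = logs[(logs.DEPTH >= 3330) & (logs.DEPTH <= 3460)]

# Per-split layer counts, cf. Table 4 (88/247 train, 115/981 test)
print(train.LABEL.value_counts(), test.LABEL.value_counts())
```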
Table 5. Results of attribute reduction.

| Well | Attributes |
| --- | --- |
| Original results (Well 1) | U, TH, K, DT, SP, WQ, LLD, LLS, CALI, GR, DEN, NPHI, PE |
| Reduction results (Well 1) | GR, DT, SP, LLD, LLS, DEN, K |
| Original results (Well 2) | AC, C2, CALI, RINC, PORT, RHOG, SW, VO, WO, PORE, VCL, VMA1, CNL, DEN, GR, RT, RI, RXO, SP, VMA6, VXO, VW, so, rnsy, rsly, rny, AC1, R2M, R025, BZSP, RA2, C1 |
| Reduction results (Well 2) | AC, GR, RT, RXO, SP |

The decision attribute is $D = \{d\}$ with $d = \{d_i = i,\ i = -1, 1\}$, where −1 and 1 represent the dry layer and the oil layer, respectively.
Table 6. Range of values for each attribute after attribute reduction.

| Attribute | Range of Values (Well 1) | Range of Values (Well 2) |
| --- | --- | --- |
| GR | [6, 200] | [27, 100] |
| DT | [152, 462] | / |
| SP | [−167, −68] | [−32, −6] |
| LLD | [0, 2.5 × 10⁴] | / |
| LLS | [0, 3307] | / |
| DEN | [1, 4] | / |
| K | [0, 5] | / |
| AC | / | [54, 140] |
| RT | / | [2, 90] |
| RI | / | / |
| RXO | / | [1, 328] |
| NG | / | / |
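Because the retained curves span very different numeric ranges (e.g., LLD up to 2.5 × 10⁴ versus DEN in [1, 4]), a scale-sensitive learner such as an ELM typically needs the attributes rescaled before training. The sketch below, assuming simple min-max scaling to [0, 1], shows one standard way to do this; the paper does not prescribe this exact preprocessing.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Sketch: min-max scaling of the retained well-1 attributes (Table 6).
# One column per attribute: GR, DT, SP, LLD, LLS, DEN, K. For brevity,
# the two rows here are just each attribute's minimum and maximum.
X = np.array([[6.0, 152.0, -167.0, 0.0, 0.0, 1.0, 0.0],
              [200.0, 462.0, -68.0, 2.5e4, 3307.0, 4.0, 5.0]])

scaler = MinMaxScaler()             # maps each column to [0, 1]
X_scaled = scaler.fit_transform(X)
print(X_scaled)                     # rows become all zeros and all ones
```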
Table 7. Performance of each model on well 1.

| Proportion of Labeled Samples | Evaluation Indicator | ELM | LapSVM | SSELM | JRSSELM | MPA-JRSSELM | MSMPA-JRSSELM |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 10% | ACC (%) | 89.9038 | 84.1346 | 90.3846 | 92.7885 | 93.4615 | 95.0962 |
| 10% | MAE | 0.2019 | 0.3714 | 0.1923 | 0.1442 | 0.1308 | 0.0981 |
| 10% | RMSE | 0.6355 | 0.7966 | 0.6202 | 0.5371 | 0.5114 | 0.4429 |
| 20% | ACC (%) | 89.9038 | 86.4423 | 90.5769 | 93.2692 | 94.1346 | 95.3846 |
| 20% | MAE | 0.2019 | 0.2712 | 0.1885 | 0.1346 | 0.1173 | 0.0923 |
| 20% | RMSE | 0.6355 | 0.7364 | 0.6139 | 0.5189 | 0.4844 | 0.4641 |
Table 8. Performance of each model on well 2.

| Proportion of Labeled Samples | Evaluation Indicator | ELM | LapSVM | SSELM | JRSSELM | MPA-JRSSELM | MSMPA-JRSSELM |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 10% | ACC (%) | 93.5829 | 88.2353 | 89.6613 | 93.7611 | 96.7914 | 97.3262 |
| 10% | MAE | 0.1283 | 0.2352 | 0.2068 | 0.1248 | 0.0642 | 0.0535 |
| 10% | RMSE | 0.5066 | 0.6860 | 0.6431 | 0.4996 | 0.3582 | 0.3270 |
| 20% | ACC (%) | 93.5829 | 89.1266 | 90.0178 | 94.1177 | 97.8610 | 98.7522 |
| 20% | MAE | 0.1283 | 0.2175 | 0.1996 | 0.1176 | 0.0428 | 0.0250 |
| 20% | RMSE | 0.5066 | 0.6595 | 0.6319 | 0.4851 | 0.2925 | 0.2234 |
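For reference, the three indicators in Tables 7 and 8 are straightforward to compute when labels and predictions both take values in {−1, 1}: in that case MAE reduces to twice the error rate and RMSE to $2\sqrt{\text{error rate}}$, which is consistent with most entries above (e.g., ELM on well 1: ACC = 89.9038% gives MAE ≈ 0.2019 and RMSE ≈ 0.6355). A minimal sketch:

```python
import numpy as np

def evaluate(y_true, y_pred):
    """ACC (%), MAE, and RMSE for labels/predictions in {-1, 1}."""
    acc = 100.0 * np.mean(y_true == y_pred)
    mae = np.mean(np.abs(y_true - y_pred))           # = 2 * error rate
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))  # = 2 * sqrt(error rate)
    return acc, mae, rmse

y_true = np.array([1, 1, -1, -1, 1, -1, 1, -1, 1, 1])
y_pred = np.array([1, 1, -1, 1, 1, -1, 1, -1, 1, 1])  # one mistake in ten
print(evaluate(y_true, y_pred))  # (90.0, 0.2, 0.632...)
```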