Article

Training Feedforward Neural Networks Using an Enhanced Marine Predators Algorithm

School of Electrical and Optoelectronic Engineering, West Anhui University, Lu’an 237012, China
* Author to whom correspondence should be addressed.
Processes 2023, 11(3), 924; https://doi.org/10.3390/pr11030924
Submission received: 14 February 2023 / Revised: 13 March 2023 / Accepted: 16 March 2023 / Published: 17 March 2023

Abstract

Feedforward neural networks (FNNs) are built from three layers of neural processing units: an input layer, a hidden layer, and an output layer. Because FNNs can fit any finite training sample set, evolutionary algorithms have been extensively employed to train them. In this paper, an enhanced version of the marine predators algorithm (MPA) based on the ranking-based mutation operator, denoted the EMPA, was presented to train FNNs, and the objective was to attain the minimum classification, prediction, and approximation errors by modifying the connection weights and deviation values. The ranking-based mutation operator not only determines the best search agent and elevates the exploitation ability, but it also delays premature convergence and accelerates the optimization process. The EMPA integrates exploration and exploitation to mitigate search stagnation, and it has sufficient stability and flexibility to acquire the finest solution. To assess the significance and stability of the EMPA, a series of experiments on seventeen distinct datasets from the machine learning repository of the University of California Irvine (UCI) was carried out. The experimental results demonstrate that the EMPA has a quicker convergence speed, greater calculation accuracy, higher classification rate, and strong stability and robustness, which makes it productive and reliable for training FNNs.

1. Introduction

Artificial neural networks are a cross-disciplinary subject that involves neuroscience, brain science, artificial intelligence, computer science, and so on, and they mainly simulate the network structure of human brain neurons to process memory information [1,2,3,4,5]. As an effective and feasible mathematical model, such networks have been employed in various domains, such as pattern recognition, intelligent robots, intelligent control, biomedicine, and function approximation. Feedforward neural networks (FNNs) are among the most prominent of these models, and they are characterized by a simple network topology, fault-tolerant distributed storage, massively parallel computation, and strong self-organization and self-adaptability. The purpose of training is to find the connection weights and deviation values that minimize an objective function computed over the collected dataset, where the objective function measures the discrepancy between the predicted values and the real values. In recent years, various swarm intelligence methodologies have been used to train feedforward neural networks, such as ant lion optimization (ALO) [6], the African vultures optimization algorithm (AVOA) [7], the dingo optimization algorithm (DOA) [8], the flower pollination algorithm (FPA) [9], moth flame optimization (MFO) [10], the salp swarm algorithm (SSA) [11], and sperm swarm optimization (SSO) [12].
Zhang et al. designed an efficient grafting constructive algorithm to train FNNs. This algorithm had faster convergence accuracy and a smaller calculation error [13]. Fan et al., introduced a new backpropagation learning algorithm based on graph regularization to resolve FNNs; this algorithm obtained a better objective value and adjustment parameters [14]. Qu et al. applied a learnable anti-noise receiver algorithm to optimize FNNs; this algorithm had a higher search efficiency and training accuracy [15]. Admon et al. utilized a novel search algorithm to train FNNs and resolve integer order differential equations; this algorithm had a certain practicability and reliability to obtain a high-precision solution [16]. Guo et al. invented an indicator correlation elimination algorithm to optimize FNNs; this algorithm had better training results [17]. Zhang et al. introduced a quantum genetic algorithm to train FNNs; this algorithm had superiority and robustness in obtaining the tuning parameters [18]. Venkatachalapathy et al. utilized FNNs to resolve nonlinear ordinary differential equations; this method had a simple network framework and high convergence accuracy [19]. Liao et al. created a novel deep learning algorithm based on FNNs to resolve the flows of dynamical systems; this algorithm’s efficiency and accuracy were relatively excellent [20]. Shao et al. produced a genetic approach to train optical FNNs; this algorithm had strong integrity and flexibility to satisfy a high classification accuracy [21]. Wu et al. employed the swarm intelligence algorithm to optimize the welding sequence optimization and FNNs; this algorithm eliminated premature convergence and generated the best solution [22]. Raziani et al. combined a modified whale optimization algorithm based on a nonlinear function with FNNs to resolve medical classification problems; this method had faster operation efficiency and better evaluation indexes [23]. Dong et al. designed an efficient and reliable training algorithm to solve FNNs; this algorithm utilized flexibility and stability to obtain a better objective value [24]. Fontes et al. designed a modified constructive algorithm to configure FNNs; this algorithm effectively trained datasets to obtain a higher classification accuracy [25]. Zheng et al. employed the Tschauner–Hempel equation to optimize FNNs; this method had high analytical solutions and good training results [26]. Yılmaz et al. introduced a differential evolution method to train artificial neural networks; this method used a better network structure to obtain the best solution [27]. Luo et al. presented a spotted hyena optimization to optimize FNNs; this algorithm had certain superiority and robustness for obtaining relatively optimal parameters [28]. Askari et al. introduced a political optimization algorithm to train FNNs; the classification accuracy and optimization rate of this algorithm were better [29]. Duman et al. integrated a manta ray foraging optimization algorithm to train FNNs; this algorithm utilized an effective search mechanism to obtain the optimal parameters [30]. Pan et al. employed FNNs to optimize full wave nonlinear inverse scattering; this technique had a faster calculation rate [31]. Wu et al. described a beetle antennae search method to optimize neural networks; this approach had a strong robustness for determining the superior solution [32]. Mahmoud et al. 
designed a pseudoinverse learning algorithm to train side road convolution neural networks; this algorithm had strong dependability and reliability for achieving the best experimental results [33]. Jamshidi et al. introduced a hybrid echo state network for pattern recognition and classification; this method had faster processing speed and the best optimization results [34]. Khalaj et al. utilized hybrid machine learning techniques and computational mechanics to design oxide precipitation hardened alloys; the method had strong robustness and stability for obtaining a high accuracy [35]. Daneshfar et al. applied an octonion-based nonlinear echo state network for speech emotion recognition in the metaverse; this method had a strong stability to obtain a high calculation accuracy and better optimization performance [36]. Abd Elminaam et al. proposed the marine predators algorithm to resolve feature selection; and the algorithm had better accuracy, sensitivity, and specificity [37]. Zhang et al. presented a domain adaptation network for remaining useful life prediction; this proposed method had a strong stability to determine the best results [38]. Zhang et al. utilized an integrated multitasking intelligent bearing fault diagnosis scheme to realize detection, classification, and fault identification [39]. Zhang et al. proposed an integrated multi-head dual sparse self-attention network for remaining useful life prediction; this method had excellent superiority and robustness [40]. Zhang et al. designed a parallel hybrid neural network for remaining useful life prediction in prognostics; this method had better results [41]. To summarize, evolutionary algorithms have strong robustness, parallelism, and scalability to train FNNs. These algorithms have strong stability and feasibility to determine the objective function value.
The MPA is derived from the widespread foraging mechanisms of marine predators, particularly Lévy flight, Brownian motion, and the optimal encounter rate policy between predator and prey [42]. To enhance its availability and practicability, the ranking-based mutation operator was added to the basic MPA; the operator accelerates the optimization process and raises the selection probability of better search agents, which strengthens exploitation and mitigates premature convergence. The EMPA was utilized to train FNNs, and the objective was to attain the minimum classification, prediction, and approximation errors by modifying the connection weights and deviation values. The EMPA has the properties of straightforward algorithm architecture, excellent control parameters, great traversal efficiency, strong stability, and easy implementation, and it integrates exploration and exploitation to determine the best solution. The experimental results demonstrate that the EMPA has the effectiveness and feasibility to achieve a quicker convergence speed and a greater calculation accuracy. Meanwhile, the EMPA has strong stability and robustness for achieving a higher classification rate.
The remainder of this article is organized as follows. Section 2 covers the mathematical modeling of FNNs. Section 3 explains the MPA. Section 4 presents the EMPA. Section 5 describes the EMPA-based feedforward neural networks. The experimental results and analysis are exhibited in Section 6. Finally, conclusions and future research are given in Section 7.

2. Mathematical Modeling of FNNs

FNNs are also known as multilayer perceptrons (MLPs) and are among the most widely used and rapidly developed artificial neural networks. The neurons are arranged in layers, each neuron is connected only to the neurons of the previous layer, and there is no feedback between layers. Three-layer FNNs consist of an input layer, a hidden layer, and an output layer, as shown in Figure 1.
The weighted sum of the input layer is computed as:
s_j = \sum_{i=1}^{n} (W_{i,j} X_i) - \theta_j, \quad j = 1, 2, \ldots, h

where n denotes the number of input nodes, W_{i,j} denotes the connection weight from the ith node of the input layer to the jth node of the hidden layer, X_i denotes the ith input node, and \theta_j denotes the deviation value of the jth hidden node.
The output value of each hidden node is computed as:

S_j = \mathrm{sigmoid}(s_j) = \frac{1}{1 + \exp(-s_j)}, \quad j = 1, 2, \ldots, h

The values of the output layer are computed as:

o_k = \sum_{j=1}^{h} (w_{j,k} S_j) - \theta_k, \quad k = 1, 2, \ldots, m

O_k = \mathrm{sigmoid}(o_k) = \frac{1}{1 + \exp(-o_k)}, \quad k = 1, 2, \ldots, m

where h denotes the number of hidden nodes, w_{j,k} denotes the connection weight from the jth node of the hidden layer to the kth node of the output layer, S_j denotes the output of the jth hidden node, and \theta_k denotes the deviation value of the kth output node. The connection weights and deviation values are the most important components of FNNs, since they determine the final output values.
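To make the forward pass concrete, the following is a minimal NumPy sketch of the three-layer computation described above; the function and variable names (sigmoid, fnn_forward, W1, theta1, W2, theta2) are illustrative assumptions and not part of the original implementation.

```python
import numpy as np

def sigmoid(x):
    # Logistic activation used for both the hidden and output layers.
    return 1.0 / (1.0 + np.exp(-x))

def fnn_forward(X, W1, theta1, W2, theta2):
    """Forward pass of a three-layer FNN.

    X      : (n,)   input vector
    W1     : (n, h) input-to-hidden connection weights W_{i,j}
    theta1 : (h,)   deviation (bias) values of the hidden nodes
    W2     : (h, m) hidden-to-output connection weights w_{j,k}
    theta2 : (m,)   deviation (bias) values of the output nodes
    """
    s = X @ W1 - theta1          # weighted sums of the input layer
    S = sigmoid(s)               # hidden-layer outputs S_j
    o = S @ W2 - theta2          # weighted sums of the hidden layer
    return sigmoid(o)            # output-layer values O_k

# Example: 4 inputs, 9 hidden nodes, 2 outputs (the structure used for the Blood dataset).
rng = np.random.default_rng(0)
n, h, m = 4, 9, 2
O = fnn_forward(rng.random(n),
                rng.standard_normal((n, h)), rng.standard_normal(h),
                rng.standard_normal((h, m)), rng.standard_normal(m))
print(O.shape)  # (2,)
```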

3. MPA

The MPA utilizes the Lévy flight, Brownian motion, and the optimal encounter rate policy between the predator and prey in marine ecosystems to achieve the best value.

3.1. Initialization

The MPA utilizes a random positioning mechanism to initialize the population and to simulate marine predation. The position is computed as follows:
X_0 = X_{\min} + \mathrm{rand} \cdot (X_{\max} - X_{\min})

where X_{\max} and X_{\min} denote the boundaries of the search space, and rand denotes a uniformly distributed random number in [0, 1].
The Elite matrix is computed as follows:

\mathrm{Elite} = \begin{bmatrix} X_{1,1}^{I} & X_{1,2}^{I} & \cdots & X_{1,D}^{I} \\ X_{2,1}^{I} & X_{2,2}^{I} & \cdots & X_{2,D}^{I} \\ \vdots & \vdots & \ddots & \vdots \\ X_{N,1}^{I} & X_{N,2}^{I} & \cdots & X_{N,D}^{I} \end{bmatrix}_{N \times D}

where X^{I} denotes the top predator vector, N denotes the population size, and D denotes the spatial dimension.
The Prey matrix is computed as follows:

\mathrm{Prey} = \begin{bmatrix} X_{1,1} & X_{1,2} & \cdots & X_{1,D} \\ X_{2,1} & X_{2,2} & \cdots & X_{2,D} \\ X_{3,1} & X_{3,2} & \cdots & X_{3,D} \\ \vdots & \vdots & \ddots & \vdots \\ X_{N,1} & X_{N,2} & \cdots & X_{N,D} \end{bmatrix}_{N \times D}

where X_{i,j} denotes the jth dimension of the position of the ith prey.
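A minimal sketch of this initialization step is given below, assuming a minimization problem; the helper names (initialize_prey, build_elite) and the sphere-function fitness are illustrative placeholders, not part of the original code.

```python
import numpy as np

def initialize_prey(N, D, x_min, x_max, rng):
    # Equation (5): uniform random positions inside the search-space boundary.
    return x_min + rng.random((N, D)) * (x_max - x_min)

def build_elite(prey, fitness):
    # The Elite matrix repeats the current top predator (best solution) N times.
    best = prey[np.argmin(fitness)]
    return np.tile(best, (prey.shape[0], 1))

rng = np.random.default_rng(1)
N, D = 30, 20
prey = initialize_prey(N, D, -10.0, 10.0, rng)
fitness = np.sum(prey**2, axis=1)      # placeholder fitness (sphere function)
elite = build_elite(prey, fitness)     # shape (N, D); every row is the top predator
```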

3.2. MPA Optimization Scenarios

According to the different velocity ratios of the predator and prey, the MPA is separated into three parts: high-velocity ratio, unit velocity ratio, and low-velocity ratio.
Phase 1: The high-velocity ratio. The predator moves more slowly than the prey. The best foraging strategy for the predator is to stay in its original position while the prey moves with Brownian motion. In this phase, the MPA performs exploration. The position is computed as follows:

Iter < (1/3) Max_Iter

\mathrm{stepsize}_i = R_B \otimes (\mathrm{Elite}_i - R_B \otimes \mathrm{Prey}_i), \quad i = 1, \ldots, N

\mathrm{Prey}_i = \mathrm{Prey}_i + P \cdot R \otimes \mathrm{stepsize}_i

where stepsize_i denotes the motion step, R_B denotes a random walk vector with normal distribution (Brownian motion), Elite_i denotes the top predator matrix, Prey_i denotes the prey matrix, P = 0.5 denotes a constant value, R denotes a randomized vector in [0, 1], and ⊗ denotes entry-wise multiplication.
Phase 2: The unit velocity ratio. The predator and the prey move at the same velocity. In this phase, the MPA gradually transits from exploration to exploitation. The prey moves with Lévy flight, and this half of the population is designed for exploitation; the predator moves with Brownian motion, and the other half of the population is designed for exploration. The position is computed as follows:

(1/3) Max_Iter < Iter < (2/3) Max_Iter

For the first half of the population, we compute:

\mathrm{stepsize}_i = R_L \otimes (\mathrm{Elite}_i - R_L \otimes \mathrm{Prey}_i), \quad i = 1, \ldots, N/2

\mathrm{Prey}_i = \mathrm{Prey}_i + P \cdot R \otimes \mathrm{stepsize}_i

where R_L denotes a randomized vector of Lévy flight.
For the second half of the population, we compute:

\mathrm{stepsize}_i = R_B \otimes (R_B \otimes \mathrm{Elite}_i - \mathrm{Prey}_i), \quad i = N/2, \ldots, N

\mathrm{Prey}_i = \mathrm{Elite}_i + P \cdot CF \otimes \mathrm{stepsize}_i

CF = \left(1 - \frac{Iter}{Max\_Iter}\right)^{2\frac{Iter}{Max\_Iter}}

where CF denotes a flexible parameter that controls the step size of the predator movement.
Phase 3: The low-velocity ratio. The predator moves more swiftly than the prey. The predator moves with Lévy flight and utilizes exploitation to capture the prey. The position is computed as follows:

Iter > (2/3) Max_Iter

\mathrm{stepsize}_i = R_L \otimes (R_L \otimes \mathrm{Elite}_i - \mathrm{Prey}_i), \quad i = 1, \ldots, N

\mathrm{Prey}_i = \mathrm{Elite}_i + P \cdot CF \otimes \mathrm{stepsize}_i
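The following is a minimal NumPy sketch of one iteration of the three velocity-ratio phases under the assumption of a Mantegna-style realization of the Lévy vector R_L; the function names (levy, mpa_phase_update) are illustrative and not the original implementation.

```python
import numpy as np
from math import gamma

def levy(shape, beta=1.5, rng=None):
    # Mantegna-style Levy-distributed step, one common way to realize R_L.
    if rng is None:
        rng = np.random.default_rng()
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, shape)
    v = rng.normal(0, 1, shape)
    return u / np.abs(v) ** (1 / beta)

def mpa_phase_update(prey, elite, it, max_iter, P=0.5, rng=None):
    """One iteration of the three velocity-ratio phases (Equations (8)-(19))."""
    if rng is None:
        rng = np.random.default_rng()
    N, D = prey.shape
    CF = (1 - it / max_iter) ** (2 * it / max_iter)
    RB = rng.standard_normal((N, D))            # Brownian motion vectors
    RL = levy((N, D), rng=rng)                  # Levy flight vectors
    R = rng.random((N, D))
    new = prey.copy()
    if it < max_iter / 3:                       # Phase 1: exploration
        step = RB * (elite - RB * prey)
        new = prey + P * R * step
    elif it < 2 * max_iter / 3:                 # Phase 2: half explore, half exploit
        half = N // 2
        step1 = RL[:half] * (elite[:half] - RL[:half] * prey[:half])
        new[:half] = prey[:half] + P * R[:half] * step1
        step2 = RB[half:] * (RB[half:] * elite[half:] - prey[half:])
        new[half:] = elite[half:] + P * CF * step2
    else:                                       # Phase 3: exploitation
        step = RL * (RL * elite - prey)
        new = elite + P * CF * step
    return new
```

Out-of-range positions produced by this update would then be clipped back into [X_min, X_max], corresponding to the "amend any predator that travels beyond the search scope" step in Algorithm 1.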

3.3. Eddy Formation and FADs' Effect

The eddy formation and fish aggregating devices (FADs) have a profound impact on the feeding behavior of marine predators, which avoids premature convergence and search stagnation. The position is computed as follows:
\mathrm{Prey}_i = \begin{cases} \mathrm{Prey}_i + CF \left[ X_{\min} + R \otimes (X_{\max} - X_{\min}) \right] \otimes U, & \text{if } r \le FADs \\ \mathrm{Prey}_i + \left[ FADs (1 - r) + r \right] (\mathrm{Prey}_{r_1} - \mathrm{Prey}_{r_2}), & \text{if } r > FADs \end{cases}

where FADs = 0.2 denotes the probability of the FADs effect, U denotes a binary vector with entries of zero and one, r denotes a randomized number in [0, 1], and r_1 and r_2 denote randomized indexes of the prey matrix.
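A minimal sketch of this perturbation is shown below; the function name fads_effect and the single random draw that selects between the two branches are assumptions made for illustration.

```python
import numpy as np

def fads_effect(prey, x_min, x_max, it, max_iter, FADs=0.2, rng=None):
    """Eddy formation / FADs perturbation of Equation (20)."""
    if rng is None:
        rng = np.random.default_rng()
    N, D = prey.shape
    CF = (1 - it / max_iter) ** (2 * it / max_iter)
    r = rng.random()
    if r <= FADs:
        U = (rng.random((N, D)) < FADs).astype(float)        # binary mask
        prey = prey + CF * (x_min + rng.random((N, D)) * (x_max - x_min)) * U
    else:
        idx1, idx2 = rng.permutation(N), rng.permutation(N)  # random prey indexes r1, r2
        prey = prey + (FADs * (1 - r) + r) * (prey[idx1] - prey[idx2])
    return prey
```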
To precisely clarify the solution process, the pseudocode of the MPA is expressed in Algorithm 1.
Algorithm 1: MPA
Begin
Step 1. Initialize the marine predator population X_i (i = 1, 2, ..., N) and the control parameters
Step 2. Assess the fitness value of each predator
    Discover the ideal predator
Step 3. while (Iter < Max_Iter) do
    for each predator
      Construct the Elite and Prey matrices via Equations (6) and (7)
      If (Iter < (1/3) Max_Iter)
        Renew the prey via Equations (9) and (10)
      Else if ((1/3) Max_Iter < Iter < (2/3) Max_Iter)
        For the first half of the population (i = 1, ..., N/2)
          Renew the prey via Equations (12) and (13)
        For the other half of the population (i = N/2, ..., N)
          Renew the prey via Equations (14) and (15)
      Else if (Iter > (2/3) Max_Iter)
        Renew the prey via Equations (18) and (19)
      End if
      Identify and amend any predator that travels beyond the search scope
      Complete memory saving and Elite renewal
      Apply the FADs effect and renew the prey via Equation (20)
      Iter = Iter + 1
    Return the best predator
End

4. EMPA

The ranking-based mutation operator filters out the best marine predator to avoid search stagnation and to enhance exploitation ability [43]. The ranking of fitness values from best to worst is computed as follows:
R_i = N - i, \quad i = 1, 2, \ldots, N

where N is the population size and i is the position of the predator after sorting the fitness values from best to worst, so the optimal predator has the best ranking. The selection probability p_i of the ith predator is calculated as follows:

p_i = \frac{R_i}{N}, \quad i = 1, 2, \ldots, N

The ranking-based mutation operator "DE/rand/1" is given in Algorithm 2. A marine predator with a higher ranking is more likely to be chosen as the base vector or the terminal vector, so its genetic information is more likely to be transmitted to the offspring. If both vectors of the differential term were also selected from higher-ranked predators, the search step of the operator might shrink drastically and cause premature convergence; for this reason, the starting vector is still chosen uniformly at random.
Algorithm 2: Ranking-based mutation operator of “DE/rand/1”
Begin
Sort the population and assign the ranking and selection probability p_i to each predator
Randomly select r1 ∈ {1, ..., N} {base vector index}
while rand > p_{r1} or r1 == i
    Randomly select r1 ∈ {1, ..., N}
end
Randomly select r2 ∈ {1, ..., N} {terminal vector index}
while rand > p_{r2} or r2 == r1 or r2 == i
    Randomly select r2 ∈ {1, ..., N}
end
Randomly select r3 ∈ {1, ..., N} {starting vector index}
while r3 == r2 or r3 == r1 or r3 == i
    Randomly select r3 ∈ {1, ..., N}
end
End
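The index-selection loops of Algorithm 2 can be sketched as follows; the function name ranking_selection_indices and the use of argsort to obtain the ranking are illustrative assumptions.

```python
import numpy as np

def ranking_selection_indices(fitness, i, rng=None):
    """Select r1 (base), r2 (terminal) and r3 (starting) vector indexes for the
    ranking-based 'DE/rand/1' mutation, following Algorithm 2 (0-based indexes)."""
    if rng is None:
        rng = np.random.default_rng()
    N = len(fitness)
    order = np.argsort(fitness)              # best (smallest MSE) first
    rank = np.empty(N, dtype=int)
    rank[order] = N - np.arange(1, N + 1)    # R_i = N - i, i = 1 for the best predator
    p = rank / N                             # selection probability p_i = R_i / N

    r1 = rng.integers(N)
    while rng.random() > p[r1] or r1 == i:   # base vector: ranking-proportional
        r1 = rng.integers(N)
    r2 = rng.integers(N)
    while rng.random() > p[r2] or r2 == r1 or r2 == i:   # terminal vector
        r2 = rng.integers(N)
    r3 = rng.integers(N)
    while r3 == r2 or r3 == r1 or r3 == i:   # starting vector: uniform, only distinct
        r3 = rng.integers(N)
    return r1, r2, r3
```

These indexes would then feed a standard DE/rand/1 mutation of the form v_i = x_{r1} + F (x_{r2} - x_{r3}), presumably with the scaling factor F = 0.7 listed in Table 3.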
The EMPA can balance the exploration and exploitation to improve the convergence speed and calculation accuracy. The pseudocode of the EMPA is expressed in Algorithm 3.
Algorithm 3: EMPA
Begin
Step 1. Initialize the marine predator population X_i (i = 1, 2, ..., N) and the control parameters
Step 2. Assess the fitness value of each predator
    Discover the ideal predator
Step 3. while (Iter < Max_Iter) do
    for each predator
      Sort the population and assign the ranking and selection probability p_i to each predator /*ranking-based mutation stage*/
      Randomly select r1 ∈ {1, ..., N} {base vector index}
      while rand > p_{r1} or r1 == i
        Randomly select r1 ∈ {1, ..., N}
      end
      Randomly select r2 ∈ {1, ..., N} {terminal vector index}
      while rand > p_{r2} or r2 == r1 or r2 == i
        Randomly select r2 ∈ {1, ..., N}
      end
      Randomly select r3 ∈ {1, ..., N} {starting vector index}
      while r3 == r2 or r3 == r1 or r3 == i
        Randomly select r3 ∈ {1, ..., N}
      end /*end of ranking-based mutation stage*/
      Construct the Elite and Prey matrices via Equations (6) and (7)
      If (Iter < (1/3) Max_Iter)
        Renew the prey via Equations (9) and (10)
      Else if ((1/3) Max_Iter < Iter < (2/3) Max_Iter)
        For the first half of the population (i = 1, ..., N/2)
          Renew the prey via Equations (12) and (13)
        For the other half of the population (i = N/2, ..., N)
          Renew the prey via Equations (14) and (15)
      Else if (Iter > (2/3) Max_Iter)
        Renew the prey via Equations (18) and (19)
      End if
      Identify and amend any predator that travels beyond the search scope
      Complete memory saving and Elite renewal
      Apply the FADs effect and renew the prey via Equation (20)
      Iter = Iter + 1
    Return the best predator
End

5. EMPA-Based Feedforward Neural Networks

The intention of training the FNNs is not only to acquire the global optimal solution for the given input values, but also to identify the best combination of connection weights and deviation values. These parameters are encoded as a vector as follows:

V = \{W, \theta\} = \{W_{1,1}, W_{1,2}, \ldots, W_{n,h}, \theta_1, \theta_2, \ldots, \theta_h\}
The mean squared error (MSE) is used as an evaluation index to measure the gap between the expected output and the actual output, covering the classification and prediction of the training samples in the datasets. The MSE is computed as follows:

MSE = \sum_{i=1}^{m} (o_i^k - d_i^k)^2

where m denotes the number of output nodes, d_i^k denotes the expected value of the ith output unit for the kth training sample, and o_i^k denotes the actual value of the ith output unit for the kth training sample.
The data set contains numerous training samples, and each sample needs to be evaluated by FNNs. The average value of the MSE is computed as follows:
\overline{MSE} = \frac{\sum_{k=1}^{s} \sum_{i=1}^{m} (o_i^k - d_i^k)^2}{s}
where s denotes the training sample size.
The fitness value of training the FNNs is computed as follows:
\mathrm{Minimize:}\; F(V) = \overline{MSE}
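The sketch below shows how a flat search agent can be decoded into the FNN parameters and scored with the averaged MSE of Equations (24) and (25); the decoding order and the helper names (decode, mean_mse) are assumptions for illustration rather than the authors' implementation.

```python
import numpy as np

def decode(vector, n, h, m):
    """Split a flat search agent into the FNN weights and deviation values.
    Assumed layout: input-to-hidden weights, hidden deviations,
    hidden-to-output weights, output deviations."""
    idx = 0
    W1 = vector[idx:idx + n * h].reshape(n, h); idx += n * h
    t1 = vector[idx:idx + h];                   idx += h
    W2 = vector[idx:idx + h * m].reshape(h, m); idx += h * m
    t2 = vector[idx:idx + m]
    return W1, t1, W2, t2

def mean_mse(vector, X, D_target, n, h, m):
    """Average MSE over all s training samples.
    X: (s, n) inputs, D_target: (s, m) expected outputs d_i^k."""
    W1, t1, W2, t2 = decode(vector, n, h, m)
    S = 1.0 / (1.0 + np.exp(-(X @ W1 - t1)))      # hidden-layer outputs
    O = 1.0 / (1.0 + np.exp(-(S @ W2 - t2)))      # actual outputs o_i^k
    return np.mean(np.sum((O - D_target) ** 2, axis=1))

# Example: a random agent for the Iris structure (4 inputs, 9 hidden nodes, 3 outputs).
rng = np.random.default_rng(2)
n, h, m = 4, 9, 3
agent = rng.standard_normal(n * h + h + h * m + m)
X = rng.random((10, n))
T = np.eye(m)[rng.integers(m, size=10)]           # one-hot expected outputs
print(mean_mse(agent, X, T, n, h, m))
```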
The correlation between the issue scope and the EMPA scope is revealed in Table 1. The EMPA-based feedforward neural networks are expressed in Algorithm 4. The flowchart of the EMPA for FNNs is shown in Figure 2.
Algorithm 4: EMPA-based feedforward neural networks
Begin
Step 1. Initialize the marine predator population X_i (i = 1, 2, ..., N), the control parameters, and the structure of the FNNs; each predator encodes the connection weights and deviation values
Step 2. Assess the fitness value of each predator via Equation (24); assign the connection weights and deviation values of the FNNs from each predator
    Discover the ideal predator
Step 3. while (Iter < Max_Iter) do
    for each predator
      Sort the population and assign the ranking and selection probability p_i to each predator /*ranking-based mutation stage*/
      Randomly select r1 ∈ {1, ..., N} {base vector index}
      while rand > p_{r1} or r1 == i
        Randomly select r1 ∈ {1, ..., N}
      end
      Randomly select r2 ∈ {1, ..., N} {terminal vector index}
      while rand > p_{r2} or r2 == r1 or r2 == i
        Randomly select r2 ∈ {1, ..., N}
      end
      Randomly select r3 ∈ {1, ..., N} {starting vector index}
      while r3 == r2 or r3 == r1 or r3 == i
        Randomly select r3 ∈ {1, ..., N}
      end /*end of ranking-based mutation stage*/
      Construct the Elite and Prey matrices via Equations (6) and (7)
      If (Iter < (1/3) Max_Iter)
        Renew the prey via Equations (9) and (10)
      Else if ((1/3) Max_Iter < Iter < (2/3) Max_Iter)
        For the first half of the population (i = 1, ..., N/2)
          Renew the prey via Equations (12) and (13)
        For the other half of the population (i = N/2, ..., N)
          Renew the prey via Equations (14) and (15)
      Else if (Iter > (2/3) Max_Iter)
        Renew the prey via Equations (18) and (19)
      End if
      Identify and amend any predator that travels beyond the search scope
      Complete memory saving and Elite renewal
      Apply the FADs effect and renew the prey via Equation (20), and assess the fitness value of each predator via Equation (24)
      Iter = Iter + 1
    Return the best predator
End

Complexity Analysis

In this section, both the time and space complexity of the EMPA-based feedforward neural networks are analyzed.
Time complexity: The EMPA-based feedforward neural networks mainly contain four steps: initialization; the EMPA optimization scenarios (Phase 1, the high-velocity ratio; Phase 2, the unit velocity ratio; Phase 3, the low-velocity ratio); eddy formation and the FADs' effect; and the halting judgment. The population size is N, the maximum iteration is Max_Iter, and the problem dimension is D. The time complexity of initialization is O(ND). The time complexity of the EMPA optimization scenarios is O(ND·Max_Iter). The time complexity of the eddy formation and FADs' effect is O(Max_Iter). The time complexity of the halting judgment is O(1). Thus, the total time complexity of the EMPA-based feedforward neural networks is O(ND·Max_Iter).
Space complexity: The amount of extra storage required by an algorithm is viewed as a measure of its space complexity. The population size is N and the problem dimension is D, and the EMPA maintains N search agents of dimension D. Therefore, the total space complexity of the EMPA-based feedforward neural networks is O(ND), and the space efficiency of the EMPA is effective and stable.

6. Experimental Results and Analysis

6.1. Experimental Setup

The numerical experiment was implemented on a computer with an Intel Core i9-12900HX 2.30 GHz CPU, RTX 3080 Ti, and 64 GB memory with Windows 11 system. All algorithms were programmed in MATLAB R2018b.

6.2. Test Datasets

The test datasets are from the machine learning repository of the University of California Irvine (UCI) and were used to evaluate the stability and robustness of the EMPA. The details of the datasets are revealed in Table 2.

6.3. Parameter Setting

To establish viability and suitability, the EMPA was contrasted with other algorithms, namely the ALO, AVOA, DOA, FPA, MFO, SSA, SSO, and MPA. The control parameters were indicative experimental values taken from the source publications. The initial parameters of all algorithms are revealed in Table 3.

6.4. Results and Analysis

For each algorithm, the population size was 30, the maximum iteration was 500, and 20 independent runs were performed. Best, Worst, Mean, and Std denote the optimal value, worst value, mean value, and standard deviation, respectively. Accuracy denotes the classification rate, and the ranking is based on accuracy. These evaluation indexes comprehensively reflect the overall reliability and superiority of each algorithm.
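For clarity, the evaluation indexes over the independent runs can be computed as in the short sketch below; the final-MSE values shown are synthetic placeholders, not the results reported in Table 4.

```python
import numpy as np

def summarize_runs(final_mse_per_run):
    """Best, Worst, Mean and Std of the final MSE over the independent runs."""
    r = np.asarray(final_mse_per_run)
    return {"Best": r.min(), "Worst": r.max(), "Mean": r.mean(), "Std": r.std()}

# Hypothetical final MSE values of 20 runs of one algorithm on one dataset.
rng = np.random.default_rng(3)
print(summarize_runs(rng.uniform(0.29, 0.31, size=20)))
```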
The experimental results of multiple datasets are revealed in Table 4. Different algorithms utilized different datasets to train the feedforward neural networks, and the purpose was to minimize the gap between anticipated output and actual output by modifying the connection weight and deviation value. To verify the effectiveness and feasibility, the EMPA was compared with other algorithms by training massive datasets. For the blood and scale datasets, the optimal values, worst values, mean values and standard deviations of the EMPA were superior to those of the ALO, AVOA, DOA, FPA, SSA, SSO, and MPA. The classification rate and ranking of the EMPA were the highest, which indicates that the EMPA appropriately modifies the traversal mechanism to arrive at the overall optimum solution. For survival, liver, and statlog datasets, when compared with the ALO, AVOA, DOA, FPA, MFO, SSA, SSO, and MPA, the optimal values, worst values, and mean values of the EMPA were superior, and the standard deviations, classification rate, and ranking of the EMPA were comparatively greater, which indicates that the EMPA provides superiority and feasibility to integrate the exploration and exploitation, as well as to obtain the best solution. For the XOR, balloon, splice, and zoo datasets, all evaluation indexes of the EMPA were superior to those of the ALO, AVOA, DOA, FPA, MFO, SSA, SSO, and MPA, which indicates that the EMPA utilizes the unique predatory mechanism and position update mechanism to avoid search stagnation and obtain the accurate solutions. For the seeds, wine, iris, cancer, diabetes, gene, parkinson, WDBC datasets, all evaluation indexes of the EMPA were better than those of the ALO, AVOA, DOA, FPA, MFO, SSA, SSO, and MPA, which indicates that the EMPA utilizes some advantages and characteristics to achieve parameter adjustment and traversal search. All classification rates and ranking of the EMPA were better compared to the ALO, AVOA, DOA, FPA, MFO, SSA, SSO, and MPA. These comparison algorithms achieve the balance between exploration and exploitation by adjusting control parameters to a certain extent, but they easily fall into local optimum and premature convergences to yield a slow convergence speed, low calculation accuracy, and worse classification rate. The EMPA, based on the marine predators foraging strategy, utilizes a distinctive optimization mechanism of Lévy flight, Brownian motion, and the optimal encounter rate policy to capture the prey in the marine ecosystem. The EMPA has the properties of straightforward algorithm architecture, excellent control parameters, great traversal efficiency, strong stability, and easy implementation. The EMPA not only successfully balances exploration and exploitation to eliminate search stagnation and slow convergence, but it also efficiently traverses the entire search space to modify parameters and identify the ideal solution. In summary, the EMPA has significant resilience and stability to efficiently train the feedforward neural networks.
The Wilcoxon rank–sum test was applied to distinguish the EMPA from the other algorithms [44]. p < 0.05 indicates that the discrepancy is noteworthy, p ≥ 0.05 indicates that the discrepancy is not noteworthy, and N/A indicates a "not applicable" discrepancy. The results of the p-value Wilcoxon rank–sum test are revealed in Table 5. The experimental results indicate that the discrepancy between the EMPA and the other algorithms was noteworthy.
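A minimal sketch of how such a pairwise comparison can be carried out with SciPy is shown below; the run samples are synthetic placeholders, not the values behind Table 5.

```python
import numpy as np
from scipy.stats import ranksums

# Hypothetical final-MSE samples from 20 independent runs of two algorithms.
rng = np.random.default_rng(4)
empa_runs = rng.normal(0.302, 0.002, size=20)
mpa_runs = rng.normal(0.305, 0.002, size=20)

stat, p_value = ranksums(empa_runs, mpa_runs)   # two-sided Wilcoxon rank-sum test
print(f"p = {p_value:.4f}, noteworthy discrepancy: {p_value < 0.05}")
```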
The convergent curves of the EMPA and other algorithms under different datasets are shown in Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17, Figure 18 and Figure 19. The convergent curve is an important method to measure the overall optimization and traversal search, which not only intuitively reflects the convergence rate and computation accuracy of the feedforward neural networks trained by the EMPA and other algorithms, but also objectively observes the iteration process, as well as the stability and feasibility of different algorithms. For the blood, scale, survival, liver, seeds, wine, iris, statlog, XOR, balloon, cancer, diabetes, gene, parkinson, splice, WDBC and zoo datasets, compared with the ALO, AVOA, DOA, FPA, MFO, SSA, SSO, and MPA, the evaluation indexes of the EMPA were relatively better in optimal values, worst values, mean values, and standard deviations. The classification rate and ranking of the EMPA were superior to those of other algorithms. The convergence rate and computation accuracy of the EMPA were superior to those of the ALO, AVOA, DOA, FPA, MFO, SSA, SSO, and MPA, which indicates that the EMPA has remarkable feasibility and resilience to eliminate search stagnation and acquire the connection weight and deviation value. The optimal values and convergence effect of the EMPA were superior to those of other algorithms under different datasets. The EMPA has the properties of straightforward algorithm architecture, excellent control parameters, great traversal efficiency, strong stability, and easy implementation. The EMPA integrates exploration and exploitation to renew the position information and arrive at the global ideal solution. The EMPA is a practical and efficient method for training feedforward neural networks.
The ANOVA tests of the EMPA and other algorithms under different datasets are shown in Figure 20, Figure 21, Figure 22, Figure 23, Figure 24, Figure 25, Figure 26, Figure 27, Figure 28, Figure 29, Figure 30, Figure 31, Figure 32, Figure 33, Figure 34, Figure 35 and Figure 36. The standard deviation is an important method to measure the dispersion degree of data average values, which can accurately portray the stability and consistency of comparison algorithms in resolving the feedforward neural networks. The lower standard deviation showed that the algorithm has extensive exploration and exploitation to acquire more stable experimental data. For different datasets, the standard deviations of the EMPA were lower than those of the ALO, AVOA, DOA, FPA, MFO, SSA, SSO, and MPA, which indicates that the EMPA has exceptional stability and durability. The EMPA had greater computational efficiency and stronger dependability to attain a more stable standard deviation. The optimal values, worst values, mean values, classification rate and ranking of the EMPA were relatively better compared to ALO, AVOA, DOA, FPA, MFO, SSA, SSO, and MPA. The EMPA, based on the marine predators foraging strategy, utilizes a distinctive optimization mechanism of Lévy flight, Brownian motion, and the optimal encounter rate policy to determine the global optimal solution. The EMPA has strong global and local search abilities to avoid search stagnation and premature convergence, which enhances the convergence effect and optimization ability. The EMPA has strong stability and robustness to train the feedforward neural networks. Meanwhile, The EMPA has a certain superiority and significance for receiving the better connection weight and deviation value.
Statistically, the EMPA is based on the marine predators foraging strategy to imitate Lévy flight, Brownian motion, and the optimal encounter rate policy to arrive at the overall best solution. The EMPA was employed to resolve FNNs for the following reasons. First, the EMPA has the properties of straightforward algorithm architecture, excellent control parameters, great traversal efficiency, strong stability, and easy implementation. Second, the EMPA utilizes the Lévy flight, Brownian motion, and the optimal encounter rate policy to determine the best solution. The Lévy flight can increase the population diversity, expand the search space, enhance the exploitation ability, and improve the calculation accuracy. The Brownian motion and optimal encounter rate policy can filter out the best solution, avoid search stagnation, enhance the exploration ability, and accelerate the convergence speed. Third, the ranking-based mutation operator was introduced into the MPA. The EMPA not only balances exploration and exploitation to avoid falling into the local optimum and premature convergence, but it also utilizes a unique search mechanism to renew the position and identify the best solution. To summarize, the EMPA has a quicker convergence speed, greater calculation accuracy, higher classification rate, and strong stability and robustness. The EMPA has a strong overall optimization ability to train FNNs.

7. Conclusions and Future Research

In this paper, an enhanced MPA based on the ranking-based mutation operator was presented to train FNNs, and the objective was not only to determine the best combination of connection weight and deviation value, but also to acquire the global best solution according to the given input value. The ranking-based mutation operator not only enhanced the selection probability to filter out the optimal search agent, but also mitigated search stagnation to accelerate convergence speed. The EMPA utilized the distinctive mechanisms of Lévy flight, Brownian motion, the optimal encounter rate policy, and the ranking-based mutation operator to attain the minimum classification, prediction and approximation errors. The EMPA had strong robustness, parallelism, and scalability to determine the best value. Compared with the other algorithms, the EMPA had excellent reliability and superiority to train FNNs. The experimental results demonstrate that the convergence speed, calculation accuracy and classification rate of the EMPA were superior to those of the other algorithms. Furthermore, the EMPA had strong practicability and feasibility for training FNNs.
In future research, we will utilize DL methods, ML methods, and CNNs. We will modify the activation function, for example by using ReLU and sReLU. We will employ random forest, XGBoost, KNN, and FNNs with other optimization algorithms. The EMPA will be utilized to resolve complex optimization problems, such as intelligent vehicle path planning, intelligent-temperature-controlled self-adjusting electric fans, and sensor information fusion.

Author Contributions

Conceptualization, J.Z. and Y.X.; methodology, J.Z. and Y.X.; software, J.Z. and Y.X.; validation, J.Z. and Y.X.; formal analysis, J.Z.; investigation, Y.X.; resources, J.Z. and Y.X.; data curation, J.Z. and Y.X.; writing—original draft preparation, J.Z.; writing—review and editing, J.Z. and Y.X.; visualization, J.Z. and Y.X.; supervision, Y.X.; project administration, J.Z. and Y.X.; funding acquisition, J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Start-up Fee for Scientific Research of High-level Talents in 2022 under Grant No. 00701092336.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data used to support the findings of this study are included within the article.

Acknowledgments

This research was funded by the Start-up Fee for Scientific Research of High-level Talents in 2022 under Grant No. 00701092336. The authors would like to thank everyone involved for their contribution to this article. They also would like to thank the editor and anonymous reviewers for the helpful comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
FNNs: Feedforward Neural Networks
MPA: Marine Predators Algorithm
EMPA: Enhanced Marine Predators Algorithm
UCI: University of California Irvine
ALO: Ant Lion Optimization
AVOA: African Vultures Optimization Algorithm
DOA: Dingo Optimization Algorithm
FPA: Flower Pollination Algorithm
MFO: Moth Flame Optimization
SSA: Salp Swarm Algorithm
SSO: Sperm Swarm Optimization
MLP: Multilayer Perceptron
MSE: Mean Squared Error
N/A: Not Applicable

References

  1. Üstün, O.; Bekiroğlu, E.; Önder, M. Design of highly effective multilayer feedforward neural network by using genetic algorithm. Expert Syst. 2020, 37, e12532. [Google Scholar] [CrossRef]
  2. Yin, Y.; Tu, Q.; Chen, X. Enhanced Salp Swarm Algorithm based on random walk and its application to training feedforward neural networks. Soft Comput. 2020, 24, 14791–14807. [Google Scholar] [CrossRef]
  3. Troumbis, I.A.; Tsekouras, G.E.; Tsimikas, J.; Kalloniatis, C.; Haralambopoulos, D. A Chebyshev polynomial feedforward neural network trained by differential evolution and its application in environmental case studies. Environ. Model. Softw. 2020, 126, 104663. [Google Scholar] [CrossRef]
  4. Al-Majidi, S.D.; Abbod, M.F.; Al-Raweshidy, H.S. A particle swarm optimisation-trained feedforward neural network for predicting the maximum power point of a photovoltaic array. Eng. Appl. Artif. Intell. 2020, 92, 103688. [Google Scholar] [CrossRef]
  5. Truong, T.T.; Dinh-Cong, D.; Lee, J.; Nguyen-Thoi, T. An effective deep feedforward neural networks (DFNN) method for damage identification of truss structures using noisy incomplete modal data. J. Build. Eng. 2020, 30, 101244. [Google Scholar] [CrossRef]
  6. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
  7. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
  8. Bairwa, A.K.; Joshi, S.; Singh, D. Dingo optimizer: A nature-inspired metaheuristic approach for engineering problems. Math. Probl. Eng. 2021, 2021, 2571863. [Google Scholar] [CrossRef]
  9. Yang, X.-S.; Karamanoglu, M.; He, X. Flower pollination algorithm: A novel approach for multiobjective optimization. Eng. Optim. 2014, 46, 1222–1237. [Google Scholar] [CrossRef] [Green Version]
  10. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl. Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  11. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  12. Eslami, M.; Babaei, B.; Shareef, H.; Khajehzadeh, M.; Arandian, B. Optimum design of damping controllers using modified Sperm swarm optimization. IEEE Access 2021, 9, 145592–145604. [Google Scholar] [CrossRef]
  13. Zhang, S.; Xie, L. Grafting constructive algorithm in feedforward neural network learning. Appl. Intell. 2022, 1–18. [Google Scholar] [CrossRef]
  14. Fan, Y.; Yang, W. A Backpropagation Learning Algorithm with Graph Regularization for Feedforward Neural Networks. Inf. Sci. 2022, 607, 263–277. [Google Scholar] [CrossRef]
  15. Qu, Z.; Liu, X.; Sun, L. Learnable antinoise-receiver algorithm based on a quantum feedforward neural network in optical quantum communication. Phys. Rev. A 2022, 105, 052427. [Google Scholar] [CrossRef]
  16. Admon, M.R.; Senu, N.; Ahmadian, A.; Majid, Z.A.; Salahshour, S. A new efficient algorithm based on feedforward neural network for solving differential equations of fractional order. Commun. Nonlinear Sci. Numer. Simul. 2022, 177, 106968. [Google Scholar] [CrossRef]
  17. Guo, W.; Qiu, H.; Liu, Z.; Zhu, J.; Wang, Q. An integrated model based on feedforward neural network and Taylor expansion for indicator correlation elimination. Intell. Data Anal. 2022, 26, 751–783. [Google Scholar] [CrossRef]
  18. Zhang, G. Research on safety simulation model and algorithm of dynamic system based on artificial neural network. Soft Comput. 2022, 26, 7377–7386. [Google Scholar] [CrossRef]
  19. Venkatachalapathy, P.; Mallikarjunaiah, S.M. A feedforward neural network framework for approximating the solutions to nonlinear ordinary differential equations. Neural Comput. Appl. 2022, 35, 1661–1673. [Google Scholar] [CrossRef]
  20. Liao, G.; Zhang, L. Solving flows of dynamical systems by deep neural networks and a novel deep learning algorithm. Math. Comput. Simul. 2022, 202, 331–342. [Google Scholar] [CrossRef]
  21. Shao, R.; Zhang, G.; Gong, X. Generalized robust training scheme using genetic algorithm for optical neural networks with imprecise components. Photon. Res. 2022, 10, 1868–1876. [Google Scholar] [CrossRef]
  22. Wu, C.; Wang, C.; Kim, J.-W. Welding sequence optimization to reduce welding distortion based on coupled artificial neural network and swarm intelligence algorithm. Eng. Appl. Artif. Intell. 2022, 114, 105142. [Google Scholar] [CrossRef]
  23. Raziani, S.; Ahmadian, S.; Jalali, S.M.J.; Chalechale, A. An Efficient Hybrid Model Based on Modified Whale Optimization Algorithm and Multilayer Perceptron Neural Network for Medical Classification Problems. J. Bionic Eng. 2022, 19, 1504–1521. [Google Scholar] [CrossRef]
  24. Dong, Z.; Huang, H. A training algorithm with selectable search direction for complex-valued feedforward neural networks. Neural Netw. 2021, 137, 75–84. [Google Scholar] [CrossRef]
  25. Fontes, C.H.; Embiruçu, M. An approach combining a new weight initialization method and constructive algorithm to configure a single Feedforward Neural Network for multi-class classification. Eng. Appl. Artif. Intell. 2021, 106, 104495. [Google Scholar] [CrossRef]
  26. Zheng, M.; Luo, J.; Dang, Z. Feedforward neural network based time-varying state-transition-matrix of Tschauner-Hempel equations. Adv. Space Res. 2022, 69, 1000–1011. [Google Scholar] [CrossRef]
  27. Yılmaz, O.; Bas, E.; Egrioglu, E. The training of Pi-Sigma artificial neural networks with differential evolution algorithm for forecasting. Comput. Econ. 2022, 59, 1699–1711. [Google Scholar] [CrossRef]
  28. Luo, Q.; Li, J.; Zhou, Y.; Liao, L. Using spotted hyena optimizer for training feedforward neural networks. Cogn. Syst. Res. 2021, 65, 1–16. [Google Scholar] [CrossRef]
  29. Askari, Q.; Younas, I. Political optimizer based feedforward neural network for classification and function approximation. Neural Process. Lett. 2021, 53, 429–458. [Google Scholar] [CrossRef]
  30. Duman, S.; Dalcalı, A.; Özbay, H. Manta ray foraging optimization algorithm–based feedforward neural network for electric energy consumption forecasting. Int. Trans. Electr. Energy Syst. 2021, 31, e12999. [Google Scholar] [CrossRef]
  31. Pan, X.-M.; Song, B.-Y.; Wu, D.; Wei, G.; Sheng, X.-Q. On Phase Information for Deep Neural Networks to Solve Full-Wave Nonlinear Inverse Scattering Problems. IEEE Antennas Wirel. Propag. Lett. 2021, 20, 1903–1907. [Google Scholar] [CrossRef]
  32. Wu, Q.; Chen, Z.; Chen, D.; Li, S. Beetle antennae search strategy for neural network model optimization with application to glomerular filtration rate estimation. Neural Process. Lett. 2021, 53, 1501–1522. [Google Scholar] [CrossRef]
  33. Mahmoud, M.A.B.; Guo, P.; Fathy, A.; Li, K. SRCNN-PIL: Side Road Convolution Neural Network Based on Pseudoinverse Learning Algorithm. Neural Process. Lett. 2021, 53, 4225–4237. [Google Scholar] [CrossRef]
  34. Jamshidi, M.B.; Daneshfar, F. A Hybrid Echo State Network for Hypercomplex Pattern Recognition, Classification, and Big Data Analysis. In Proceedings of the 2022 12th International Conference on Computer and Knowledge Engineering (ICCKE), Mashhad, Iran, 17–18 November 2022; IEEE: Piscataway, NJ, USA; pp. 7–12. [Google Scholar]
  35. Khalaj, O.; Jamshidi, M.B.; Saebnoori, E.; Masek, B.; Stadler, C.; Svoboda, J. Hybrid machine learning techniques and computational mechanics: Estimating the dynamic behavior of oxide precipitation hardened steel. IEEE Access 2021, 9, 156930–156946. [Google Scholar] [CrossRef]
  36. Daneshfar, F.; Jamshidi, M.B. An Octonion-Based Nonlinear Echo State Network for Speech Emotion Recognition in Metaverse. SSRN Electron. J. 2022, 4242011. [Google Scholar]
  37. Elminaam, D.S.A.; Nabil, A.; Ibraheem, S.A.; Houssein, E.H. An efficient marine predators algorithm for feature selection. IEEE Access 2021, 9, 60136–60153. [Google Scholar] [CrossRef]
  38. Zhang, J.; Li, X.; Tian, J.; Jiang, Y.; Luo, H.; Yin, S. A variational local weighted deep sub-domain adaptation network for remaining useful life prediction facing cross-domain condition. Reliab. Eng. Syst. Saf. 2023, 231, 108986. [Google Scholar] [CrossRef]
  39. Zhang, J.; Zhang, K.; An, Y.; Luo, H.; Yin, S. An Integrated Multitasking Intelligent Bearing Fault Diagnosis Scheme Based on Representation Learning Under Imbalanced Sample Condition. IEEE Trans. Neural Netw. Learn. Syst. 2023, 1–12. [Google Scholar] [CrossRef]
  40. Zhang, J.; Li, X.; Tian, J.; Luo, H.; Yin, S. An integrated multi-head dual sparse self-attention network for remaining useful life prediction. Reliab. Eng. Syst. Saf. 2023, 233, 109096. [Google Scholar] [CrossRef]
  41. Zhang, J.; Tian, J.; Li, M.; Leon, J.I.; Franquelo, L.G.; Luo, H.; Yin, S. A parallel hybrid neural network with integration of spatial and temporal features for remaining useful life prediction in prognostics. IEEE Trans. Instrum. Meas. 2022, 72, 1–12. [Google Scholar] [CrossRef]
  42. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  43. Yan, Z.; Zhang, J.; Zeng, J.; Tang, J. Nature-inspired approach: An enhanced whale optimization algorithm for global optimization. Math. Comput. Simul. 2021, 185, 17–46. [Google Scholar] [CrossRef]
  44. Bridge, P.D.; Sawilowsky, S.S. Increasing physicians’ awareness of the impact of statistics on research outcomes: Comparative power of the t-test and Wilcoxon rank-sum test in small samples applied research. J. Clin. Epidemiol. 1999, 52, 229–235. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Three-layer feedforward neural networks.
Figure 2. Flowchart of EMPA for FNNs.
Figure 3. The convergent curves of Blood.
Figure 4. The convergent curves of Scale.
Figure 5. The convergent curves of Survival.
Figure 6. The convergent curves of Liver.
Figure 7. The convergent curves of Seeds.
Figure 8. The convergent curves of Wine.
Figure 9. The convergent curves of Iris.
Figure 10. The convergent curves of Statlog.
Figure 11. The convergent curves of XOR.
Figure 12. The convergent curves of Balloon.
Figure 13. The convergent curves of Cancer.
Figure 14. The convergent curves of Diabetes.
Figure 15. The convergent curves of Gene.
Figure 16. The convergent curves of Parkinson.
Figure 17. The convergent curves of Splice.
Figure 18. The convergent curves of WDBC.
Figure 19. The convergent curves of Zoo.
Figure 20. The ANOVA test of Blood.
Figure 21. The ANOVA test of Scale.
Figure 22. The ANOVA test of Survival.
Figure 23. The ANOVA test of Liver.
Figure 24. The ANOVA test of Seeds.
Figure 25. The ANOVA test of Wine.
Figure 26. The ANOVA test of Iris.
Figure 27. The ANOVA test of Statlog.
Figure 28. The ANOVA test of XOR.
Figure 29. The ANOVA test of Balloon.
Figure 30. The ANOVA test of Cancer.
Figure 31. The ANOVA test of Diabetes.
Figure 32. The ANOVA test of Gene.
Figure 33. The ANOVA test of Parkinson.
Figure 34. The ANOVA test of Splice.
Figure 35. The ANOVA test of WDBC.
Figure 36. The ANOVA test of Zoo.
Table 1. Correlation between issue scope and EMPA scope.

Issue Scope | EMPA Scope
A set scheme (P_1, P_2, ..., P_N) to tackle the FNNs | A marine predator population (X_1, X_2, ..., X_N)
The optimal scheme to obtain the best solution | The marine predator or search agent
The evaluation value of FNNs | The fitness value of EMPA
Table 2. The details of the datasets.

Dataset   | Attribute | Class | Training | Testing | Input | Hidden | Output
Blood     | 4  | 2 | 493 | 255 | 4  | 9   | 2
Scale     | 4  | 3 | 412 | 213 | 4  | 9   | 3
Survival  | 3  | 2 | 202 | 104 | 3  | 7   | 2
Liver     | 6  | 2 | 227 | 118 | 6  | 13  | 2
Seeds     | 7  | 3 | 139 | 71  | 7  | 15  | 3
Wine      | 13 | 3 | 117 | 61  | 13 | 27  | 3
Iris      | 4  | 3 | 99  | 51  | 4  | 9   | 3
Statlog   | 13 | 2 | 178 | 92  | 13 | 27  | 2
XOR       | 3  | 2 | 4   | 4   | 3  | 7   | 2
Balloon   | 4  | 2 | 10  | 10  | 4  | 9   | 2
Cancer    | 9  | 2 | 599 | 100 | 9  | 19  | 2
Diabetes  | 8  | 2 | 507 | 261 | 8  | 17  | 2
Gene      | 57 | 2 | 70  | 36  | 57 | 115 | 2
Parkinson | 22 | 2 | 129 | 66  | 22 | 45  | 2
Splice    | 60 | 2 | 660 | 340 | 60 | 121 | 2
WDBC      | 30 | 2 | 394 | 165 | 30 | 61  | 2
Zoo       | 16 | 7 | 67  | 34  | 16 | 33  | 7
Table 3. Initial parameters of all algorithms.

Algorithm | Parameter | Value
ALO  | Unpredictable value rand | [0,1]
     | Constant number w | 5
AVOA | Randomized number L1 | [0,1]
     | Randomized number L2 | [0,1]
     | Randomized number z | [−1,1]
     | Randomized number h | [−2,2]
     | Randomized number rand | [0,1]
     | Randomized number u | [0,1]
     | Randomized number v | [0,1]
     | Constant number β | 1.5
DOA  | Randomized vector a1 | [0,1]
     | Randomized vector a2 | [0,1]
     | Coefficient vector A | (1,0)
     | Coefficient vector B | (1,1)
     | Randomized number b | (0,3)
FPA  | Switch probability ρ | 0.8
     | Step size λ | 1.5
     | Randomized number ε | [0,1]
MFO  | Constant number b | 1
     | Randomized number t | [−1,1]
     | Randomized number r | [−2,−1]
SSA  | Randomized number c2 | [0,1]
     | Randomized number c3 | [0,1]
SSO  | Velocity damping factor D | [0,1]
     | Randomized number ph_Rand1 | [7,14]
     | Randomized number ph_Rand2 | [7,14]
     | Randomized number ph_Rand3 | [7,14]
MPA  | Uniform randomized number rand | [0,1]
     | Uniform randomized number R | [0,1]
     | Constant number P | 0.5
     | Probability of FADs effect | 0.2
     | Binary vector U | [0,1]
     | Randomized number r | [0,1]
EMPA | Uniform randomized number rand | [0,1]
     | Uniform randomized number R | [0,1]
     | Constant number P | 0.5
     | Probability of FADs effect | 0.2
     | Binary vector U | [0,1]
     | Randomized number r | [0,1]
     | Scaling factor F | 0.7
Table 4. Experimental results of multiple datasets.

Dataset | Result | ALO | AVOA | DOA | FPA | MFO | SSA | SSO | MPA | EMPA
BloodBest3.07 × 10−12.99 × 10−13.18 × 10−13.09 × 10−12.99 × 10−13.06 × 10−13.39 × 10−13.02 × 10−12.96 × 10−1
Worst3.21 × 10−13.58 × 10−13.66 × 10−13.22 × 10−13.08 × 10−13.95 × 10−13.64 × 10−13.07 × 10−13.06 × 10−1
Mean3.15 × 10−13.17 × 10−13.47 × 10−13.16 × 10−13.03 × 10−13.26 × 10−13.50 × 10−13.05 × 10−13.02 × 10−1
Std4.25 × 10−31.55 × 10−21.49 × 10−22.99 × 10−32.07 × 10−32.21 × 10−27.50 × 10−31.53 × 10−32.31 × 10−3
Accuracy80.3980.3979.6180808077.2581.5785.21
Rank335444621
ScaleBest1.39 × 10−11.21 × 10−12.86 × 10−11.56 × 10−11.20 × 10−11.28 × 10−12.27 × 10−11.05 × 10−19.74× 10−2
Worst1.86 × 10−12.99 × 10−16.42 × 10−11.85 × 10−11.57 × 10−12.13 × 10−14.79 × 10−11.79 × 10−11.57 × 10−1
Mean1.60 × 10−11.80 × 10−14.69 × 10−11.68 × 10−11.41 × 10−11.59 × 10−13.63 × 10−11.33 × 10−11.23 × 10−1
Std1.18 × 10−25.18 × 10−29.35 × 10−27.18 × 10−39.75 × 10−32.24 × 10−27.96 × 10−22.14 × 10−21.68 × 10−2
Accuracy89.2086.8579.3488.0387.3290.1485.9290.6192.02
Rank479563821
SurvivalBest3.60 × 10−13.46 × 10−13.87 × 10−13.59 × 10−13.11 × 10−13.36 × 10−14.08 × 10−13.03 × 10−12.96 × 10−1
Worst3.86 × 10−13.95 × 10−14.26 × 10−13.80 × 10−13.62 × 10−14.15 × 10−14.39 × 10−13.49 × 10−13.38 × 10−1
Mean3.75 × 10−13.67 × 10−14.09 × 10−13.71 × 10−13.33 × 10−13.64 × 10−14.19 × 10−13.27 × 10−13.21 × 10−1
Std5.74 × 10−31.21 × 10−21.30 × 10−26.20 × 10−31.46 × 10−22.56 × 10−28.38 × 10−31.08 × 10−21.04 × 10−2
Accuracy80.7679.3380.2979.8177.8878.8579.3381.7381.94
Rank364587621
LiverBest4.05 × 10−13.70 × 10−14.52 × 10−14.25 × 10−13.49 × 10−13.60 × 10−14.85 × 10−13.39 × 10−13.00 × 10−1
Worst4.57 × 10−14.59 × 10−14.85 × 10−14.64 × 10−13.97 × 10−15.66 × 10−15.11 × 10−13.89 × 10−13.60 × 10−1
Mean4.32 × 10−14.12 × 10−14.72 × 10−14.45 × 10−13.76 × 10−14.42 × 10−14.94 × 10−13.59 × 10−13.38 × 10−1
Std1.54 × 10−22.70 × 10−21.04 × 10−29.28 × 10−31.38 × 10−25.63 × 10−27.52 × 10−31.47 × 10−21.55 × 10−2
Accuracy69.4971.1956.7860.1772.8860.5956.7874.5875.43
Rank548736821
SeedsBest6.25 × 10−24.32 × 10−22.35 × 10−18.39 × 10−21.74 × 10−34.29 × 10−22.20 × 10−11.29 × 10−21.43 × 10−2
Worst3.84 × 10−11.66 × 10−16.76 × 10−11.19 × 10−18.10 × 10−23.57 × 10−15.12 × 10−19.35 × 10−29.35 × 10−2
Mean2.25 × 10−18.63 × 10−23.99 × 10−19.85 × 10−24.64 × 10−21.04 × 10−13.47 × 10−15.77 × 10−24.94 × 10−2
Std1.48 × 10−13.35 × 10−21.38 × 10−18.74 × 10−32.05 × 10−28.62 × 10−27.81 × 10−22.23 × 10−22.18 × 10−2
Accuracy73.3890.1470.4292.9692.9688.7378.8794.3794.43
Rank748335621
WineBest4.68 × 10−28.55 × 10−33.93 × 10−11.97 × 10−28.68 × 10−68.55 × 10−32.39 × 10−18.54 × 10−30
Worst6.07 × 10−13.32 × 10−17.26 × 10−17.55 × 10−25.98 × 10−24.27 × 10−16.48 × 10−11.03 × 10−15.12 × 10−2
Mean2.91 × 10−18.49 × 10−25.36 × 10−14.23 × 10−23.28 × 10−21.34 × 10−14.56 × 10−14.49 × 10−21.75 × 10−2
Std1.62 × 10−17.77 × 10−28.74 × 10−21.60 × 10−21.56 × 10−21.42 × 10−11.05 × 10−12.74 × 10−21.47 × 10−2
Accuracy86.8991.8054.1088.5286.8890.1677.0593.4495.08
Rank639574821
| Iris | Best | 3.56 × 10−2 | 9.01 × 10−4 | 1.95 × 10−1 | 9.34 × 10−2 | 6.57 × 10−4 | 4.60 × 10−3 | 2.12 × 10−1 | 0 | 0 |
| | Worst | 2.30 × 10−1 | 1.21 × 10−1 | 6.02 × 10−1 | 1.54 × 10−1 | 4.56 × 10−2 | 3.85 × 10−1 | 4.58 × 10−1 | 7.77 × 10−2 | 7.07 × 10−2 |
| | Mean | 1.55 × 10−1 | 4.55 × 10−2 | 3.99 × 10−1 | 1.21 × 10−1 | 2.23 × 10−2 | 8.86 × 10−2 | 3.35 × 10−1 | 4.27 × 10−2 | 3.64 × 10−2 |
| | Std | 6.72 × 10−2 | 3.16 × 10−2 | 1.20 × 10−1 | 1.98 × 10−2 | 1.15 × 10−2 | 1.28 × 10−1 | 6.16 × 10−2 | 2.20 × 10−2 | 1.48 × 10−2 |
| | Accuracy | 89.22 | 85.29 | 90.20 | 97.55 | 88.24 | 90.20 | 90.20 | 98.04 | 98.23 |
| | Rank | 5 | 7 | 4 | 3 | 6 | 4 | 4 | 2 | 1 |
| Statlog | Best | 1.85 × 10−1 | 1.20 × 10−1 | 3.17 × 10−1 | 1.74 × 10−1 | 1.51 × 10−1 | 1.97 × 10−1 | 3.22 × 10−1 | 1.02 × 10−1 | 8.25 × 10−2 |
| | Worst | 5.57 × 10−1 | 2.92 × 10−1 | 4.81 × 10−1 | 2.30 × 10−1 | 2.51 × 10−1 | 3.03 × 10−1 | 4.93 × 10−1 | 2.42 × 10−1 | 1.74 × 10−1 |
| | Mean | 2.64 × 10−1 | 2.36 × 10−1 | 3.92 × 10−1 | 1.96 × 10−1 | 1.79 × 10−1 | 2.37 × 10−1 | 4.09 × 10−1 | 1.56 × 10−1 | 1.23 × 10−1 |
| | Std | 7.46 × 10−2 | 4.60 × 10−2 | 4.52 × 10−2 | 1.57 × 10−2 | 2.41 × 10−2 | 2.90 × 10−2 | 5.01 × 10−2 | 4.32 × 10−2 | 2.68 × 10−2 |
| | Accuracy | 81.52 | 80.43 | 67.39 | 80.43 | 83.70 | 80.43 | 75 | 85.87 | 85.96 |
| | Rank | 4 | 5 | 7 | 5 | 3 | 5 | 6 | 2 | 1 |
| XOR | Best | 2.53 × 10−3 | 0 | 0 | 2.21 × 10−2 | 0 | 1.77 × 10−56 | 3.04 × 10−3 | 0 | 0 |
| | Worst | 8.96 × 10−2 | 5.00 × 10−1 | 5.00 × 10−1 | 6.51 × 10−2 | 4.58 × 10−11 | 2.50 × 10−1 | 3.27 × 10−1 | 0 | 0 |
| | Mean | 1.90 × 10−2 | 3.00 × 10−1 | 2.24 × 10−1 | 3.49 × 10−2 | 2.30 × 10−12 | 1.25 × 10−2 | 1.87 × 10−1 | 0 | 0 |
| | Std | 2.01 × 10−2 | 2.51 × 10−1 | 2.19 × 10−1 | 1.28 × 10−2 | 1.02 × 10−11 | 5.59 × 10−2 | 1.08 × 10−1 | 0 | 0 |
| | Accuracy | 100 | 50 | 75 | 75 | 50 | 75 | 50 | 100 | 100 |
| | Rank | 1 | 3 | 2 | 2 | 3 | 2 | 3 | 1 | 1 |
| Balloon | Best | 1.37 × 10−9 | 0 | 0 | 1.83 × 10−5 | 0 | 0 | 1.82 × 10−4 | 0 | 0 |
| | Worst | 9.06 × 10−5 | 0 | 4.96 × 10−1 | 9.38 × 10−4 | 0 | 5.06 × 10−30 | 2.13 × 10−1 | 0 | 0 |
| | Mean | 2.36 × 10−5 | 0 | 1.45 × 10−1 | 2.23 × 10−4 | 0 | 2.54 × 10−31 | 6.22 × 10−2 | 0 | 0 |
| | Std | 3.12 × 10−5 | 0 | 1.56 × 10−1 | 2.49 × 10−4 | 0 | 1.13 × 10−30 | 6.55 × 10−2 | 0 | 0 |
| | Accuracy | 100 | 100 | 80 | 100 | 80 | 80 | 100 | 100 | 100 |
| | Rank | 1 | 1 | 2 | 1 | 2 | 2 | 1 | 1 | 1 |
| Cancer | Best | 3.59 × 10−2 | 3.35 × 10−2 | 5.92 × 10−2 | 3.44 × 10−2 | 2.43 × 10−2 | 3.69 × 10−2 | 7.07 × 10−2 | 2.44 × 10−2 | 2.15 × 10−2 |
| | Worst | 2.55 × 10−1 | 7.77 × 10−2 | 2.91 × 10−1 | 4.82 × 10−2 | 4.84 × 10−2 | 5.80 × 10−2 | 1.58 × 10−1 | 5.01 × 10−2 | 4.50 × 10−2 |
| | Mean | 7.66 × 10−2 | 4.78 × 10−2 | 2.22 × 10−1 | 4.41 × 10−2 | 3.70 × 10−2 | 4.62 × 10−2 | 1.15 × 10−1 | 3.84 × 10−2 | 3.45 × 10−2 |
| | Std | 7.31 × 10−2 | 9.00 × 10−3 | 7.57 × 10−2 | 3.59 × 10−3 | 5.19 × 10−3 | 5.51 × 10−3 | 2.60 × 10−2 | 6.08 × 10−3 | 7.04 × 10−3 |
| | Accuracy | 99 | 99 | 98 | 99 | 99 | 99 | 99 | 99 | 99 |
| | Rank | 1 | 1 | 2 | 1 | 1 | 1 | 1 | 1 | 1 |
| Diabetes | Best | 3.22 × 10−1 | 3.03 × 10−1 | 3.58 × 10−1 | 3.17 × 10−1 | 2.88 × 10−1 | 3.15 × 10−1 | 3.87 × 10−1 | 2.93 × 10−1 | 2.70 × 10−1 |
| | Worst | 3.74 × 10−1 | 4.13 × 10−1 | 4.58 × 10−1 | 3.63 × 10−1 | 3.22 × 10−1 | 5.41 × 10−1 | 4.83 × 10−1 | 3.15 × 10−1 | 3.12 × 10−1 |
| | Mean | 3.45 × 10−1 | 3.32 × 10−1 | 4.14 × 10−1 | 3.38 × 10−1 | 3.03 × 10−1 | 3.90 × 10−1 | 4.47 × 10−1 | 3.03 × 10−1 | 2.90 × 10−1 |
| | Std | 1.53 × 10−2 | 2.48 × 10−2 | 2.64 × 10−2 | 1.33 × 10−2 | 7.50 × 10−3 | 6.06 × 10−2 | 2.76 × 10−2 | 5.61 × 10−3 | 9.99 × 10−3 |
| | Accuracy | 78.16 | 77.01 | 74.33 | 80.08 | 77.78 | 78.93 | 67.82 | 80.84 | 80.93 |
| | Rank | 5 | 7 | 8 | 3 | 6 | 4 | 9 | 2 | 1 |
| Gene | Best | 2.43 × 10−1 | 7.14 × 10−2 | 2.83 × 10−1 | 5.08 × 10−4 | 7.14 × 10−2 | 1.57 × 10−1 | 3.77 × 10−1 | 7.95 × 10−11 | 0 |
| | Worst | 4.14 × 10−1 | 3.57 × 10−1 | 9.00 × 10−1 | 2.45 × 10−1 | 2.71 × 10−1 | 3.57 × 10−1 | 4.50 × 10−1 | 1.43 × 10−1 | 5.71 × 10−2 |
| | Mean | 3.19 × 10−1 | 2.18 × 10−1 | 3.93 × 10−1 | 1.08 × 10−1 | 1.78 × 10−1 | 2.79 × 10−1 | 4.15 × 10−1 | 5.22 × 10−2 | 2.35 × 10−2 |
| | Std | 5.53 × 10−2 | 7.55 × 10−2 | 1.57 × 10−1 | 5.97 × 10−2 | 5.22 × 10−2 | 5.78 × 10−2 | 1.86 × 10−2 | 4.38 × 10−2 | 1.75 × 10−2 |
| | Accuracy | 5.56 | 19.44 | 8.33 | 30.56 | 25 | 8.33 | 2.78 | 33.33 | 40.33 |
| | Rank | 7 | 5 | 6 | 3 | 4 | 6 | 8 | 2 | 1 |
| Parkinson | Best | 7.38 × 10−2 | 1.99 × 10−2 | 1.44 × 10−1 | 3.87 × 10−2 | 4.60 × 10−3 | 3.88 × 10−2 | 1.47 × 10−1 | 1.8 × 10−121 | 0 |
| | Worst | 2.33 × 10−1 | 2.56 × 10−1 | 4.04 × 10−1 | 9.76 × 10−2 | 1.86 × 10−1 | 2.33 × 10−1 | 2.96 × 10−1 | 9.30 × 10−2 | 9.30 × 10−2 |
| | Mean | 1.44 × 10−1 | 8.66 × 10−2 | 3.06 × 10−1 | 7.40 × 10−2 | 5.36 × 10−2 | 1.33 × 10−1 | 2.17 × 10−1 | 4.35 × 10−2 | 4.50 × 10−2 |
| | Std | 5.42 × 10−2 | 6.39 × 10−2 | 7.75 × 10−2 | 1.71 × 10−2 | 4.33 × 10−2 | 6.27 × 10−2 | 3.69 × 10−2 | 3.88 × 10−2 | 3.11 × 10−2 |
| | Accuracy | 71.21 | 72.73 | 68.18 | 72.73 | 71.21 | 72.73 | 69.70 | 75.76 | 75.76 |
| | Rank | 3 | 2 | 5 | 2 | 3 | 2 | 4 | 1 | 1 |
| Splice | Best | 5.42 × 10−1 | 2.80 × 10−1 | 4.67 × 10−1 | 3.88 × 10−1 | 3.36 × 10−1 | 4.50 × 10−1 | 6.63 × 10−1 | 1.27 × 10−1 | 9.81 × 10−2 |
| | Worst | 6.59 × 10−1 | 4.74 × 10−1 | 4.99 × 10−1 | 4.86 × 10−1 | 6.68 × 10−1 | 6.15 × 10−1 | 8.53 × 10−1 | 1.55 × 10−1 | 4.31 × 10−1 |
| | Mean | 5.94 × 10−1 | 3.86 × 10−1 | 4.86 × 10−1 | 4.33 × 10−1 | 4.29 × 10−1 | 5.47 × 10−1 | 7.76 × 10−1 | 1.41 × 10−1 | 1.93 × 10−1 |
| | Std | 3.32 × 10−2 | 6.09 × 10−2 | 9.92 × 10−3 | 2.15 × 10−2 | 8.67 × 10−2 | 4.54 × 10−2 | 5.22 × 10−2 | 8.04 × 10−3 | 1.12 × 10−1 |
| | Accuracy | 52.65 | 74.12 | 50.88 | 64.71 | 77.94 | 59.12 | 48.53 | 82.65 | 83.27 |
| | Rank | 7 | 4 | 8 | 5 | 3 | 6 | 9 | 2 | 1 |
| WDBC | Best | 4.55 × 10−2 | 2.23 × 10−2 | 1.40 × 10−1 | 4.42 × 10−2 | 2.76 × 10−2 | 4.57 × 10−2 | 1.72 × 10−1 | 1.38 × 10−2 | 8.24 × 10−3 |
| | Worst | 1.17 × 10−1 | 2.83 × 10−1 | 5.12 × 10−1 | 6.67 × 10−2 | 4.67 × 10−2 | 1.22 × 10−1 | 3.58 × 10−1 | 6.25 × 10−2 | 4.56 × 10−2 |
| | Mean | 7.96 × 10−2 | 5.96 × 10−2 | 3.32 × 10−1 | 5.50 × 10−2 | 3.41 × 10−2 | 7.18 × 10−2 | 2.67 × 10−1 | 4.16 × 10−2 | 2.61 × 10−2 |
| | Std | 2.16 × 10−2 | 5.54 × 10−2 | 1.15 × 10−1 | 6.83 × 10−3 | 4.78 × 10−3 | 2.04 × 10−2 | 4.76 × 10−2 | 1.22 × 10−2 | 1.03 × 10−2 |
| | Accuracy | 91.52 | 94.55 | 86.67 | 95.76 | 93.94 | 92.12 | 85.45 | 98.79 | 98.85 |
| | Rank | 7 | 4 | 8 | 3 | 5 | 6 | 9 | 2 | 1 |
| Zoo | Best | 3.58 × 10−1 | 7.46 × 10−2 | 5.04 × 10−1 | 1.57 × 10−1 | 1.49 × 10−2 | 8.96 × 10−2 | 4.18 × 10−1 | 1.49 × 10−2 | 4.13 × 10−43 |
| | Worst | 7.76 × 10−1 | 5.52 × 10−1 | 7.78 × 10−1 | 3.73 × 10−1 | 2.54 × 10−1 | 7.01 × 10−1 | 1.6 × 100 | 1.49 × 10−1 | 1.11 × 10−1 |
| | Mean | 5.11 × 10−1 | 3.07 × 10−1 | 6.11 × 10−1 | 2.82 × 10−1 | 1.13 × 10−1 | 3.01 × 10−1 | 9.52 × 10−1 | 6.33 × 10−2 | 4.40 × 10−2 |
| | Std | 1.14 × 10−1 | 1.24 × 10−1 | 7.18 × 10−2 | 5.46 × 10−2 | 6.36 × 10−2 | 1.69 × 10−1 | 3.37 × 10−1 | 3.22 × 10−2 | 3.22 × 10−2 |
| | Accuracy | 52.94 | 52.94 | 41.18 | 67.65 | 76.47 | 64.71 | 50 | 82.35 | 87.27 |
| | Rank | 6 | 6 | 8 | 4 | 3 | 5 | 7 | 2 | 1 |
Table 5. p-values of the Wilcoxon rank-sum test.
| Datasets | ALO | AVOA | DOA | FPA | MFO | SSA | SSO | MPA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Blood | 6.80 × 10−8 | 1.20 × 10−6 | 6.80 × 10−8 | 6.80 × 10−8 | 2.85 × 10−2 | 9.17 × 10−8 | 6.80 × 10−8 | 7.58 × 10−4 |
| Scale | 5.22 × 10−7 | 2.59 × 10−5 | 6.79 × 10−8 | 7.89 × 10−8 | 1.48 × 10−3 | 1.41 × 10−5 | 6.79 × 10−8 | 1.56 × 10−3 |
| Survival | 6.80 × 10−8 | 6.80 × 10−8 | 6.80 × 10−8 | 6.80 × 10−8 | 1.14 × 10−2 | 1.06 × 10−7 | 6.80 × 10−8 | 1.08 × 10−3 |
| Liver | 6.80 × 10−8 | 6.80 × 10−8 | 6.80 × 10−8 | 6.80 × 10−8 | 5.23 × 10−7 | 7.90 × 10−8 | 6.80 × 10−8 | 7.58 × 10−4 |
| Seeds | 3.95 × 10−6 | 1.98 × 10−4 | 6.71 × 10−8 | 2.19 × 10−7 | 9.68 × 10−5 | 2.73 × 10−4 | 6.71 × 10−8 | 2.55 × 10−2 |
| Wine | 7.31 × 10−8 | 1.72 × 10−5 | 6.29 × 10−8 | 3.54 × 10−5 | 1.56 × 10−3 | 3.04 × 10−6 | 6.29 × 10−8 | 7.64 × 10−4 |
| Iris | 5.17 × 10−7 | 4.40 × 10−2 | 6.71 × 10−8 | 6.71 × 10−8 | 3.18 × 10−3 | 5.07 × 10−4 | 6.71 × 10−8 | 3.78 × 10−2 |
| Statlog | 6.69 × 10−8 | 6.83 × 10−7 | 6.69 × 10−8 | 7.78 × 10−8 | 2.33 × 10−6 | 6.69 × 10−8 | 6.69 × 10−8 | 1.14 × 10−3 |
| XOR | 8.01 × 10−9 | 4.68 × 10−5 | 2.55 × 10−5 | 8.01 × 10−9 | 1.05 × 10−7 | 8.01 × 10−9 | 8.01 × 10−9 | N/A |
| Balloon | 8.01 × 10−9 | N/A | 2.49 × 10−5 | 8.01 × 10−9 | 3.42 × 10−3 | 2.99 × 10−8 | 8.01 × 10−9 | N/A |
| Cancer | 7.49 × 10−6 | 6.59 × 10−6 | 6.68 × 10−8 | 4.12 × 10−5 | 1.98 × 10−4 | 1.58 × 10−5 | 6.68 × 10−8 | 8.54 × 10−4 |
| Diabetes | 6.80 × 10−8 | 1.66 × 10−7 | 6.80 × 10−8 | 6.80 × 10−8 | 1.04 × 10−4 | 6.80 × 10−8 | 6.80 × 10−8 | 5.90 × 10−5 |
| Gene | 5.97 × 10−8 | 6.07 × 10−8 | 6.07 × 10−8 | 2.47 × 10−6 | 6.07 × 10−8 | 5.93 × 10−8 | 6.07 × 10−8 | 1.59 × 10−2 |
| Parkinson | 2.55 × 10−7 | 3.84 × 10−2 | 6.75 × 10−8 | 2.56 × 10−3 | 7.56 × 10−4 | 2.93 × 10−6 | 6.75 × 10−8 | N/A |
| Splice | 6.80 × 10−8 | 1.25 × 10−5 | 6.80 × 10−8 | 2.96 × 10−7 | 1.58 × 10−6 | 6.80 × 10−8 | 6.80 × 10−8 | 2.85 × 10−5 |
| WDBC | 7.89 × 10−8 | 7.40 × 10−5 | 6.79 × 10−8 | 7.89 × 10−8 | 1.14 × 10−2 | 6.79 × 10−8 | 6.79 × 10−8 | 2.33 × 10−4 |
| Zoo | 6.49 × 10−8 | 1.18 × 10−7 | 6.52 × 10−8 | 6.52 × 10−8 | 3.20 × 10−4 | 1.56 × 10−7 | 6.52 × 10−8 | 5.27 × 10−4 |
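Table 5 lists Wilcoxon rank-sum p-values comparing the EMPA run results with each of the eight competitors on every dataset; values below the conventional 0.05 level indicate a statistically significant difference between the two result distributions, and N/A marks comparisons where the test is not applicable (for example when the two samples are identical). The sketch below shows how such a p-value can be computed with SciPy's ranksums; the helper name and the identical-sample check are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.stats import ranksums

# Illustrative helper (hypothetical names, not the authors' code): Wilcoxon rank-sum
# test between the per-run results of EMPA and one competing trainer.
def wilcoxon_p(empa_runs, other_runs):
    x = np.asarray(empa_runs, dtype=float)
    y = np.asarray(other_runs, dtype=float)
    # If every value in both samples is the same, the ranks carry no information
    # and the test is undefined, so report "N/A" as in Table 5.
    if np.ptp(np.concatenate([x, y])) == 0:
        return "N/A"
    return ranksums(x, y).pvalue
```

A value such as 6.80 × 10−8 therefore means that the hypothesis that the two samples come from the same distribution can be rejected with very high confidence.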