Article

Neural Network Classifier for Ti6Al4V Selective Laser Melting Process Classification via Elephant Herding Optimization with Multi-Learning

1 School of Mechanical Engineering, Tiangong University, Tianjin 300387, China
2 School of Computer Science and Technology, Tiangong University, Tianjin 300387, China
3 School of Control Science and Engineering, Tiangong University, Tianjin 300387, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(4), 1746; https://doi.org/10.3390/app16041746
Submission received: 6 January 2026 / Revised: 3 February 2026 / Accepted: 4 February 2026 / Published: 10 February 2026
(This article belongs to the Section Additive Manufacturing Technologies)

Abstract

Classification techniques, which rely on annotated data to train autonomous decision-making, have become pivotal tools in diverse domains. These techniques rely on models such as Backpropagation Neural Networks (BPNNs). However, BPNNs frequently become trapped in local optima, leading to suboptimal classification accuracy, and their convergence is relatively slow, which limits efficiency in complex, nonlinear process data classification applications. Existing optimization algorithms struggle to balance global exploration and local exploitation when tuning BPNNs. To address these limitations, this paper proposes a BP classifier based on an Elephant Herding Optimization with a Multi-Learning strategy (MLEHO), termed MLEHO-BPC. The proposed MLEHO establishes a triple learning framework. Firstly, a collective learning stage incorporates two different adaptive operators into the original algorithm to strengthen global exploration. Subsequently, a group learning stage is designed, integrating exemplar, deskmate, and random learning methods to enhance convergence efficiency. Finally, a tutorship learning stage, guided by fitness value discrimination, empowers the algorithm to escape local optima. Benchmark function tests confirm MLEHO's superiority in convergence speed and stability over comparative algorithms. Furthermore, MLEHO replaces traditional gradient descent, reformulating the BPNN's update mechanism to optimize weights and thresholds. Validated on classification datasets and a Ti6Al4V process classification problem, MLEHO-BPC demonstrates exceptional classification accuracy and robustness compared with other algorithm-based classifiers.

1. Introduction

Classification [1] stands as a pivotal technique within diverse domains. The objective of a classifier is to construct a discriminative model based on feature distributions, enabling the system to predict unknown samples and execute autonomous decisions. Numerous classification methods have been proposed to date, such as K-Nearest Neighbors [2], Decision Trees [3], Naive Bayes [4], logistic regression [5], and neural networks. Existing research [6,7] indicates that no single classifier is optimal for all datasets. Consequently, investigating more effective classification methods remains crucial for addressing application problems in diverse domains.
As a fundamental artificial neural network (ANN) [8], the Backpropagation Neural Network (BPNN) [9] derives its design principles from mathematical optimization theory. It is a fully connected feedforward network that employs an error backpropagation mechanism. Compared to linear models, BPNN exhibits superior function approximation capability, rendering it particularly adept at solving high-dimensional nonlinear problems. This characteristic has positioned BPNN as an analytical tool across diverse engineering and scientific domains. Jin et al. introduced an improved BPNN to address a transformer fault diagnosis problem, exhibiting higher accuracy and lower error [10]. Cui et al. established a prediction model for geotechnical parameters using a BPNN, ultimately demonstrating its generalization capability [11]. Comparative experiments involving BPNNs, Support Vector Machines (SVM), and Random Forests (RF) confirm the superiority of BPNNs in predicting the performance of solid oxide fuel cells [12]. In [13], a BPNN was integrated with the least squares method for error compensation in laser ranging, significantly enhancing the measurement accuracy. Zhang et al. developed a wireless sensor data prediction system based on a BPNN, with results indicating its accurate forecasting of parameter variation trends [14].
BPNNs are also extensively applied to classification problems. Zheng et al. introduced an enhanced BPNN incorporating partial least squares and hierarchical cluster analysis, achieving robust classification accuracy [15]. A modified BPNN was employed for remote sensing image classification, with results demonstrating improved convergence speed and classification accuracy [16]. Leveraging data processing and enhancement techniques, Gao et al. utilized a BPNN to classify and identify blood fluorescence spectra across diverse animals [17]. The literature [18] details a multilayer perceptron-based BPNN for classifying steel surface defects. Li et al. applied a BPNN to classify surrounding rock geology, with their findings indicating its utility in ensuring efficient construction [19]. In [20], a principal component analysis-based BPNN performed network traffic data classification.
However, BPNNs also exhibit inherent limitations, particularly a susceptibility to becoming trapped in local optima and slow convergence, ultimately constraining performance accuracy. This problem originates primarily from the sensitivity of the network parameters, namely, the weights and thresholds. To overcome these limitations, researchers are increasingly integrating meta-heuristic algorithms with BPNNs to optimize and adjust network parameters. Ding et al. integrated a genetic algorithm (GA) with a BPNN, demonstrating generalization and stability on standard datasets [21]. A BPNN model incorporating beetle swarm optimization (BSO) was introduced in [22] for precise mobile vehicle weight prediction. Comparative analysis of multiple BPNN classifiers confirmed that the bat algorithm (BA)-BP achieves superior convergence speed and accuracy in road condition classification tasks [23]. To improve robotic control precision, Ref. [24] presented a fruit fly optimization algorithm (FOA)-based BPNN. Ref. [25] combined GA, ant colony optimization (ACO), and BPNN for ship trajectory prediction. Yu et al. developed a chaotic krill herd algorithm (KHA) variant integrated with BPNNs for kidney bean yield forecasting, with experimental results indicating minimal prediction error [26].
Selective laser melting (SLM), a critical branch of additive manufacturing, is undergoing a shift from traditional experience-driven to modern data-driven operations. Effectively handling process data has become a central research focus. In recent years, as Industry 4.0/5.0 and smart manufacturing technologies have advanced, the integration of advanced algorithms, such as deep learning and swarm intelligence, with SLM processing is reshaping the quality control paradigm for SLM, enabling continuous improvements in manufacturing precision and efficiency. A CNN model developed in [27] enables distribution estimation and counting of surface defects in SLM components. Ref. [28] presents an acoustic-deep learning hybrid approach to establish a mechanistic model for spatter characterization, enhancing process stability monitoring reliability. Ref. [29] applies CNN algorithms for porosity prediction, while [30] introduces a digital twin framework that rapidly predicts thermal deformation, supporting geometric precision and quality stability in SLM manufacturing. Notably, ref. [31] proposes a 3D generative adversarial network model achieving end-to-end optimization of lattice structures from design to fabrication, pioneering rapid customization pathways. Collectively, these advancements drive the transformation of SLM toward intelligent manufacturing, opening new industrial paradigms.
The selection of algorithms for industrial applications requires careful consideration of multiple factors, including data characteristics, real-time requirements, and computational constraints. Deep learning models [32] such as CNNs, LSTMs, and Transformers have played significant roles in manufacturing defect detection and process optimization due to their strong automatic feature extraction and hierarchical learning capabilities. However, these complex architectures demand high-quality training data and substantial computational resources [33], posing notable deployment challenges in manufacturing environments. In contrast, the combination of BPNNs and metaheuristic algorithms maintains advantages in the following aspects: in industrial settings with limited computational resources and a need for rapid iterative computing for real-time adjustments, the lightweight architecture of BPNNs coupled with metaheuristics is easier to deploy and implement, offering a more flexible parameter optimization framework. Moreover, for process parameter optimization problems with clear physical interpretations, this approach exhibits global approximation capabilities that better preserve physical consistency between inputs and outputs. These characteristics ensure that the hybrid method remains irreplaceable in manufacturing applications [34].
BPNN methods and SI algorithms also exhibit significant advantages in SLM. Li et al. [35] utilized a BPNN to classify extracted melt pool feature data, thereby reducing porosity. Xia et al. [36] integrated GA with BPNNs to predict the forming quality of SLM aluminum anodes and improve processing efficiency. Praveenkumar et al. [37] demonstrated that metaheuristic algorithms, particularly JAYA and Cohort Intelligence, effectively optimize LPBF parameters for Ti6Al4V, significantly enhancing hardness through rapid and stable convergence. Vaidyaa et al. [38] proposed a hybrid approach combining neural networks with an optimization algorithm, targeting the parameter optimization of AlSi10Mg components manufactured using SLM. Work by Costa et al. [39] provides a practical model for engineers to optimize multiple LPBF process parameters simultaneously, achieving a desired compromise between mechanical properties such as strength and ductility. Chaudhry et al. [40] combined neural networks with GA, differential evolution (DE), and particle swarm optimization (PSO) to optimize SLM process parameters. However, for SLM process classification problems, existing methods mostly adopt basic approaches such as GA and PSO to optimize neural networks [41,42], failing to fully consider the complex and nonlinear data characteristics of classification tasks, resulting in limited breakthroughs in classification accuracy.
According to the aforementioned review, this paper proposes an Elephant Herding Optimization with a multi-learning strategy, designated as MLEHO. However, as noted in the literature [43], EHO, similar to other meta-heuristic algorithms, exhibits limited stability when handling large-scale parameter optimization problems and is highly prone to premature convergence. To address these limitations, the algorithm outlined in this research is developed along three key dimensions.
(1) A collective learning stage is established by incorporating two different adaptive operators into the original algorithm, significantly enhancing its efficiency in finding global optima. (2) A group learning stage is designed, integrating exemplar, deskmate, and random learning to accelerate convergence and improve local search performance. (3) A tutorship learning stage, guided by fitness value discrimination, is devised to improve the algorithm's capability to escape local optima.
Data classification constitutes another focus of this research. To enhance the classification capability of BPNNs, an MLEHO-based classifier is proposed, termed MLEHO-BPC. Its core lies in replacing traditional neural network training methods by leveraging MLEHO for network parameter optimization. Comprehensive evaluation across benchmark functions, classification datasets, and a Ti6Al4V process classification problem verifies that MLEHO-BPC achieves exceptional classification accuracy and reliable stability. In digital manufacturing ecosystems, intelligent classification of process quality data serves as a critical component for production control. The MLEHO-BPC integrates SI optimization with neural networks, incorporating both process parameters and physical constraints to achieve integration between the classification model and production execution. This lightweight classification framework is capable of adapting to rapid response requirements and supports flexible deployment, offering a reliable technical solution for quality control in smart manufacturing.
The subsequent sections of this manuscript proceed as outlined below. Section 2 introduces the BPNN and the original EHO. Section 3 details the design of MLEHO and the BPNN classifier. Section 4 executes the experimental analysis to validate the proposed algorithm and classifier performances. Section 5 summarizes the major conclusions derived from this research and future perspectives.

2. Related Works

2.1. BP Neural Network

The Backpropagation Neural Network (BPNN) is a widely employed artificial neural network model. As a universal machine learning method, it serves as the foundational architecture for numerous neural network variants [44]. A structure diagram of a BPNN is illustrated in Figure 1.
The standard BPNN comprises three layers. The input layer consists of linear processing units responsible for receiving external signals and transmitting them to the hidden layer. The hidden layer incorporates nonlinear activation functions to perform nonlinear transformations of the received signals. Finally, the output layer produces the ultimate result based on the processed information from the hidden layer. This architecture exemplifies the classic feedforward structure.
Assuming a BPNN with M inputs, N outputs, and a hidden layer comprising K neurons, its network parameters include the input-to-hidden-layer weights w1 and the hidden-to-output-layer weights w2. The corresponding thresholds b1 and b2 belong to the hidden layer and the output layer, respectively. The mathematical model of the BPNN is formulated as follows.
$$h(t) = f(w_1 x(t) + b_1), \qquad y(t) = g(w_2 h(t) + b_2) \tag{1}$$
where t denotes the current iteration. x(t) and y(t) correspond to the network input and output, respectively. h(t) represents the hidden-layer output at the t-th step. The activation functions for the hidden and output layers are denoted as f(∙) and g(∙), respectively.
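For concreteness, the following minimal Python sketch implements the forward pass of Equation (1); the sigmoid hidden activation f, linear output activation g, and layer sizes are illustrative assumptions rather than settings prescribed by this paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bpnn_forward(x, w1, b1, w2, b2):
    """Forward pass of Equation (1): h = f(w1 x + b1), y = g(w2 h + b2)."""
    h = sigmoid(w1 @ x + b1)   # hidden-layer output, nonlinear f
    y = w2 @ h + b2            # output layer with linear g
    return y

# Example: M = 4 inputs, K = 6 hidden neurons, N = 3 outputs
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(6, 4)), rng.normal(size=6)
w2, b2 = rng.normal(size=(3, 6)), rng.normal(size=3)
print(bpnn_forward(rng.normal(size=4), w1, b1, w2, b2))
```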

2.2. Elephant Herding Optimization

Elephant Herding Optimization [45], which mimics the herding behavior of elephant clans, is a meta-heuristic algorithm based on swarm intelligence. The algorithm is summarized by three update rules: the matriarch, the best individual, moves toward the center of the herd; an adult male, the worst individual, leaves the herd for a random position; and the other members follow the matriarch. The mathematical expression is given in Equation (2).
$$E_{best} = \beta \times E_{mean}(t), \qquad E_{worst} = lb + rand \times (ub - lb), \qquad E_i(t+1) = E_i(t) + \alpha \times (E_{best}^* - E_i(t)) \times r_1 \tag{2}$$
where E represents the position of the elephant herd. t is the current number of iterations. E b e s t is the optimal individual and E w o r s t is the worst individual in the population. E m e a n t represents the mean position of the population at the t-th iteration. ub and lb are the upper and lower boundaries, respectively. α, β, and r1 are random numbers in the range [0, 1].
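The update rules in Equation (2) can be sketched as follows; the single-clan, fully vectorized formulation and the boundary clipping are simplifying assumptions rather than the exact reference implementation of [45].

```python
import numpy as np

def eho_step(E, fitness, lb, ub, rng):
    """One EHO iteration per Equation (2): members follow the matriarch,
    the matriarch moves to the herd center, the worst leaves randomly."""
    fit = np.apply_along_axis(fitness, 1, E)
    best, worst = np.argmin(fit), np.argmax(fit)
    alpha, beta, r1 = rng.random(), rng.random(), rng.random()
    E_new = E + alpha * (E[best] - E) * r1                   # other members
    E_new[best] = beta * E.mean(axis=0)                      # matriarch
    E_new[worst] = lb + rng.random(E.shape[1]) * (ub - lb)   # adult male
    return np.clip(E_new, lb, ub)

rng = np.random.default_rng(1)
E = rng.uniform(-100, 100, size=(30, 10))       # population of 30, dim 10
for _ in range(100):
    E = eho_step(E, lambda x: np.sum(x**2), -100.0, 100.0, rng)
print(min(np.sum(E**2, axis=1)))                # best sphere-function value
```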

3. Methods

3.1. Improved Algorithm

In this section, a Multi-Learning Elephant Herding Optimization (MLEHO) based on collective, group, and tutorship learning stages is proposed to address the shortcomings of EHO. The collective learning stage introduces two different adaptive operators into the original version. The group learning stage uses exemplar, deskmate, and random learning methods. The tutorship learning stage involves a selection update strategy based on the tutored learning factor. The pseudo-algorithm of MLEHO is given in Algorithm 1.
Algorithm 1 MLEHO
Initialization phase:
1  Initialize the population E_i
2  Calculate the fitness values
3  Find E*_best and E*_worst according to the fitness scores
4  while t ≤ Max_iter do
5    Collective Learning
6    for i = 1, 2, …, N
7      Update parameters β1, m1, r1 and p
8      if E_i(t) = E*_best
9        e_i = β1 × E_mean(t)
10     else if E_i(t) = E*_worst
11       if rand < 0.5
12         e_i = E*_best − m1 ⊗ (lb + rand × (ub − lb))
13       else
14         e_i = E*_best + m1 ⊗ (lb + rand × (ub − lb))
15       end if
16     else
17       e_i = E_i(t) + p × (E*_best − E_i(t)) × r1
18     end if
19     if fit(e_i) < fit(E_i(t))
20       E_i(t) = e_i
21     end if
22   end for
23   Group Learning
24   for i = 1, 2, …, N
25     Update parameters k, s1, s2, β2, β3, r2, α1, α2, m2 and m3
26     if k = 1
27       e_i = E*_best + β2 × (E*_best − E_i(t))
28     else if k = 2
29       e_i = α1 × E_s1(t) + β3 × (E_s1(t) − E_i(t)) + α2 × E_{i+1}(t)
30     else if k = 3
31       if fit(E_i(t)) < fit(E_s2(t))
32         e_i = E_i(t) + m2 ⊗ (E_i(t) − E*_best)
33       else
34         e_i = E_i(t) + m2 ⊗ (E*_best − m3 × E_i(t))
35       end if
36     end if
37     if fit(e_i) < fit(E_i(t))
38       E_i(t) = e_i
39     end if
40   end for
41   Tutorship Learning
42   for i = 1, 2, …, N
43     Update parameters tl, levy, σ, j1 and j2
44     if tl < 0.3
45       e_i = E*_best + levy ⊗ (E_j1(t) − (1 − μ) × E_j2(t))
46     else
47       e_i = E_i(t)
48     end if
49     if fit(e_i) < fit(E_i(t))
50       E_i(t+1) = e_i
51     end if
52   end for
53   Find E*_best according to the fitness scores
54 end while

3.2. Collective Learning Stage

To enable EHO to explore the search space more comprehensively and improve the efficiency of discovering the global optimum, the proposed algorithm introduces two different adaptive parameters and defines the modified original update rules as a collective learning stage. The mathematical expression of this improvement is as follows:
$$E_{best} = \beta_1 \times E_{mean}(t), \qquad E_{worst} = E_{best}^* \pm m_1 \otimes (lb + rand \times (ub - lb)), \qquad E_i(t+1) = E_i(t) + p \times (E_{best}^* - E_i(t)) \times r_1 \tag{3}$$
In Equation (3), the iterative method for the optimal individual remains unchanged. The iterative method for the worst individual introduces an adaptive parameter m1 based on a Cauchy distribution and moves closer to or further away from the optimal individual, as determined by a random number, as in Algorithm 1 (see lines 11–15). The iterative method for the other individuals replaces the random number α with a cosine-based adaptive parameter p. These parameters are given in Equations (4) and (5).
$$m_1 = Cauchy(0,1) \times \exp\left(-\frac{10 t}{Max\_iter}\right) \tag{4}$$
$$p = \cos\left(\frac{\pi t}{10 \times Max\_iter}\right) \times \left(1 - \frac{t}{Max\_iter}\right) \tag{5}$$
where Max_iter denotes the maximum number of iterations. For ease of distinction, β1 is the same random number as β. It is worth noting that the sign ⊗ represents the vector product in this paper.
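A short Python sketch of the two adaptive operators, following the reconstructions of Equations (4) and (5) above; numpy's standard_cauchy is one way to realize Cauchy(0, 1).

```python
import numpy as np

def m1(t, max_iter, rng):
    # Cauchy perturbation whose scale decays exponentially over iterations
    return rng.standard_cauchy() * np.exp(-10.0 * t / max_iter)

def p(t, max_iter):
    # Cosine-modulated step size that shrinks linearly toward zero
    return np.cos(np.pi * t / (10.0 * max_iter)) * (1.0 - t / max_iter)
```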

3.3. Group Learning Stage

To improve the convergence speed and local search performance, each individual randomly selects one of three learning methods in the group learning stage, namely the exemplar, deskmate, and random learning methods.
Firstly, as the name suggests, the exemplar learning method involves learning from the best individual, which is defined in the proposed algorithm as approaching the optimal individual. Its mathematical expression is as follows:
$$E_i(t+1) = E_{best}^* + \beta_2 \times (E_{best}^* - E_i(t)) \tag{6}$$
In Equation (6), β2 is a sine adaptive parameter, denoted as follows:
$$\beta_2 = 2\sin(2\pi \times rand) \times \exp\left(rand \times \frac{Max\_iter - t + 1}{Max\_iter}\right) + rand \tag{7}$$
Secondly, the deskmate learning method is defined, analogous to how students learn from their immediate deskmates in a classroom. This method involves an individual approaching and exchanging knowledge with neighboring individuals within the group. Its mathematical expression is as follows:
$$E_i(t+1) = \alpha_1 \times E_{s_1}(t) + \beta_3 \times (E_{s_1}(t) - E_i(t)) + \alpha_2 \times E_{i+1}(t) \tag{8}$$
where s1 is a random integer in the range [1, N] with s1 ≠ i. α1, α2, and β3 represent the adaptive parameters given in Equation (9).
$$\alpha_1 = \frac{Max\_iter + t}{2 \times Max\_iter}, \qquad \alpha_2 = \frac{Max\_iter - t}{2 \times Max\_iter}, \qquad \beta_3 = \exp(rand) \times \exp\left(3\cos\left(\frac{(Max\_iter - t)\pi}{t}\right)\right) \tag{9}$$
Finally, the random learning method is defined as an individual moving closer to or further away from the optimal individual, as determined by the fitness of another randomly selected individual. Its mathematical expression is as follows:
$$E_i(t+1) = \begin{cases} E_i(t) + m_2 \otimes (E_i(t) - E_{best}^*) & \text{if } fit(E_i(t)) < fit(E_{s_2}(t)) \\ E_i(t) + m_2 \otimes (E_{best}^* - m_3 \times E_i(t)) & \text{otherwise} \end{cases} \tag{10}$$
where s2 is a random integer in the range [1, N] with s2 ≠ i. m2 denotes a set of stochastic values drawn from a Cauchy distribution, and m3 represents a sine random number. Their mathematical expressions are as follows:
$$m_2 = Cauchy(0,1), \qquad m_3 = 2\sin\left(rand + \frac{\pi}{2}\right) \tag{11}$$
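The three group-learning updates can be sketched as below, following Equations (6)-(11); the uniform draw of the method index k and the guards s1, s2 ≠ i are assumptions where the text leaves details open.

```python
import numpy as np

def group_learning(E, i, fit, E_best, t, T, rng):
    """One group-learning update for individual i (population E, fitness fit)."""
    N = len(E)
    k = rng.integers(1, 4)                        # pick one of three methods
    if k == 1:                                    # exemplar learning, Eq. (6)
        beta2 = (2 * np.sin(2 * np.pi * rng.random())
                 * np.exp(rng.random() * (T - t + 1) / T) + rng.random())
        return E_best + beta2 * (E_best - E[i])
    if k == 2:                                    # deskmate learning, Eq. (8)
        s1 = rng.choice([j for j in range(N) if j != i])
        a1, a2 = (T + t) / (2 * T), (T - t) / (2 * T)
        beta3 = np.exp(rng.random()) * np.exp(3 * np.cos((T - t) * np.pi / max(t, 1)))
        return a1 * E[s1] + beta3 * (E[s1] - E[i]) + a2 * E[(i + 1) % N]
    # k == 3: random learning, Eq. (10)
    s2 = rng.choice([j for j in range(N) if j != i])
    m2 = rng.standard_cauchy(E.shape[1])
    m3 = 2 * np.sin(rng.random() + np.pi / 2)
    if fit[i] < fit[s2]:
        return E[i] + m2 * (E[i] - E_best)
    return E[i] + m2 * (E_best - m3 * E[i])
```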

3.4. Tutorship Learning Stage

The tutorship learning stage randomly updates the worse individuals according to the tutored learning factor, a discriminant based on fitness values, and utilizes the Lévy flight operator to enhance the algorithm's capability to escape local extrema. Its mathematical expression is as follows:
$$E_i(t+1) = \begin{cases} E_{best}^* + levy \otimes (E_{j_1}(t) - (1-\mu) \times E_{j_2}(t)) & \text{if } tl < 0.2 \\ E_i(t) & \text{otherwise} \end{cases} \tag{12}$$
In Equation (12), μ denotes a random binary value taking 0 or 1. The Lévy flight operator levy and the tutored learning factor tl are calculated using Equations (13) and (14), respectively.
$$levy = 0.01 \times \frac{u \times \sigma}{|v|^{1/\beta}}, \qquad \sigma = \left[\frac{\Gamma(1+\beta) \times \sin(\pi\beta/2)}{\Gamma\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{\frac{\beta-1}{2}}}\right]^{1/\beta} \tag{13}$$
$$tl = \frac{fit_{max} - fit_i}{fit_{max} - fit_{min}} \tag{14}$$
where u and v are defined as random numbers in the range [0, 1], and β denotes a constant set to 1.5. fitmax, fitmin, and fiti represent the fitness values of the maximum, the minimum, and individual i, respectively.
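A sketch of the Lévy operator and the tutored learning factor follows. Note that in the common Mantegna scheme the quantities u and v are drawn from normal distributions (u with standard deviation σ); that convention is assumed here, with β = 1.5 as stated above.

```python
import numpy as np
from math import gamma, sin, pi

def levy(dim, rng, beta=1.5):
    """Mantegna-style Levy step per Equation (13)."""
    sigma = ((gamma(1 + beta) * sin(pi * beta / 2))
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return 0.01 * u / np.abs(v) ** (1 / beta)

def tutored_learning_factor(fit):
    """Equation (14): worse individuals (fitness near fit_max) get tl near 0."""
    return (fit.max() - fit) / (fit.max() - fit.min() + 1e-12)
```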

3.5. Classifier Design

In constructing the complete MLEHO-BPC model, the design of the hidden layer constitutes a critical step. The selected number of neurons directly influences the overall performance of BPNNs. An insufficient number leads to irreversible information loss during interlayer propagation, resulting in failure to attain target accuracy requirements. Conversely, excessive neurons significantly increase model complexity, thereby elevating the risk of overfitting. Furthermore, the hidden-layer configuration indirectly affects the difficulty of optimizing weights and thresholds. Therefore, determining an appropriate neuron count is essential prior to constructing the MLEHO-BPC. This research adopts the empirical formula proposed in the literature [46], expressed as follows.
$$K = \sqrt{M + N} + a \tag{15}$$
where a denotes an integer within the range [1, 10].
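For example, with the Ti6Al4V dataset used later (M = 7 features, N = 3 classes), Equation (15) yields candidate hidden sizes of roughly 4 to 13; the small helper below is purely illustrative.

```python
import math

def candidate_hidden_sizes(M, N):
    """Enumerate the hidden sizes allowed by Equation (15), a in [1, 10]."""
    base = math.sqrt(M + N)
    return [round(base) + a for a in range(1, 11)]

print(candidate_hidden_sizes(M=7, N=3))  # e.g., [4, 5, ..., 13]
```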
As noted previously, the proposed MLEHO integrates exceptional global exploration, local exploitation capabilities, and an ability to escape local optima. Leveraging these strengths, MLEHO serves as an adaptive global training strategy to replace traditional gradient descent in BPNNs, thereby effectively resolving the limitation of model accuracy. The pseudo-algorithm description of MLEHO-BPC is provided in Algorithm 2.
Algorithm 2 MLEHO-BPC
Input: Dataset sample
1  Normalize the dataset
2  Select the training set by the stratified k-fold cross-validation method
3  for i = 1, 2, …, k
4    Initialize BPNN and MLEHO algorithm parameters
5    Calculate and find E*_best according to the fitness scores
6    while t ≤ T do
7      Update the weights and thresholds by using Equations (3)–(14)
8      Calculate and find E*_best according to the fitness scores
9    end while
10   Get E*_best for the i-th fold
11   Update the weights and thresholds and train the BP neural network
12   Output classification prediction results
13 end for
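The essential coupling between MLEHO and the BPNN can be sketched as follows: each individual encodes all weights and thresholds as one flat vector, and the fitness to be minimized is the classification error on the training fold. The decoding layout and the error-based fitness are assumptions consistent with the descriptions in Sections 3.5 and 3.6, not the authors' exact implementation.

```python
import numpy as np

def decode(vec, M, K, N):
    """Unpack a flat vector of length K*M + K + N*K + N into (w1, b1, w2, b2)."""
    i = 0
    w1 = vec[i:i + K * M].reshape(K, M); i += K * M
    b1 = vec[i:i + K];                   i += K
    w2 = vec[i:i + N * K].reshape(N, K); i += N * K
    b2 = vec[i:i + N]
    return w1, b1, w2, b2

def fitness(vec, X, y, M, K, N):
    """Classification error of the BPNN encoded by vec on (X, y)."""
    w1, b1, w2, b2 = decode(vec, M, K, N)
    H = 1.0 / (1.0 + np.exp(-(X @ w1.T + b1)))   # hidden layer, sigmoid f
    scores = H @ w2.T + b2                        # output layer, linear g
    acc = np.mean(scores.argmax(axis=1) == y)
    return 1.0 - acc   # MLEHO minimizes, so optimize the error
```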

3.6. Complexity Analysis

The space complexity of SI algorithms is primarily determined by the maximum space required during the search process. For MLEHO, its space requirements mainly come from three components: the individual position matrix, the historical best solutions, and temporary computational variables. Given a population size of N and a problem dimension of d, the space complexity of the individual position matrix is O(N × d). The storage of the global best solution and the historical individual best solutions requires O(d) and O(N × d), respectively. Considering that temporary vectors generated during iterations can often reuse storage space, the overall space complexity remains O(N × d). This property enables the algorithm to maintain relatively low space requirements even when dealing with high-dimensional optimization problems.
Time complexity can be evaluated through a step-by-step analysis of the algorithm’s key operations. In this study, Big-O notation is employed to express time complexity based on the scale of input data, focusing primarily on the most time-consuming steps during execution. For MLEHO, the main computational steps include population initialization, iterative updating of individual positions, and fitness evaluation. The overall time complexity is derived by analyzing these components.
During the population initialization phase, generating N d-dimensional initial solutions requires O(N × d) time. In the iterative individual update phase, each iteration involves updating the positions and states of all individuals, with a per-update computational complexity of O(d). Thus, the time complexity per iteration is O(N × d). Given a maximum number of iterations Tmax, the total complexity for this phase becomes O(N × d × Tmax). For the fitness evaluation phase, evaluating one individual takes O(f(d)) time, where f denotes the fitness function. In this study, the fitness value corresponds to the classification accuracy of the d-dimensional individual, and its time complexity is linear with respect to d. Overall, the time complexity of the algorithm is O(N × d × Tmax).

4. Experiment and Results

4.1. Benchmark Functions Test

This section employs 20 benchmark [47,48,49,50] functions to systematically evaluate the performance of MLEHO. As listed in Table 1, F1–F7 are unimodal functions, designed to assess the optimization precision and stability when only a single global minimum exists. F8–F13 in Table 2 are multimodal functions, evaluating the algorithm’s capability to locate the global optimum when multiple local minima interfere. F14–F20 in Table 3 are fixed-dimension multimodal functions, primarily verifying the algorithm’s convergence rate and optimization efficiency in lower-dimensional problems.

4.2. Experimental Parameters Setting

The experiments encompass ablation experiments and comparative experiments. In the ablation experiments, the proposed learning strategy stages are systematically evaluated for their influence on algorithm performance. In the comparative experiments, the proposed MLEHO is compared with EHO, PSO [51], DE [52], and BA [53]. For the above algorithms, the population size and maximum number of iterations are set to 30 and 500, respectively, with all runs executed under identical random seeds. Each algorithm undergoes 20 independent tests on the benchmark functions presented in Table 1, Table 2 and Table 3, and Table 4 displays the algorithm parameters. The aggregated results are documented in Table 5 and Table 6.

4.3. Experimental Result Analysis

The ablation experiment results are presented in Table 5, in which C, G, and T denote collective, group, and tutorship learning strategies, respectively. It is evident that introducing each learning strategy individually enhances algorithm performance, highlighting the effectiveness of the proposed strategies. For F3, which has only one global minimum, the T strategy yields the most significant performance improvement. This is because the T strategy enables rapid adjustment of search positions, directly targeting the global optimal direction. For F12, which is characterized by numerous oscillating local extrema, the C strategy is most effective. This is attributed to the C strategy improving the efficiency of discovering the global optimal position. For the fixed multimodal function F15, the G strategy performs best. This is because the G strategy improves local search performance and reduces redundant exploration. Compared to individual strategies, the combination of learning strategies further enhances algorithm performance, which further validates the effectiveness of the proposed algorithm.
As evidenced by the comparative experimental results in Table 6, MLEHO demonstrates unequivocally superior performance on F1–F7, F9–F13, F15, and F17–F19. For F8, MLEHO achieves superiority only in the optimal (Best) value. This limitation arises because F8 is a multimodal function characterized by numerous local extrema exhibiting high complexity. Regarding F14, although MLEHO similarly attains the optimal and mean values, its standard deviation is only higher than that of the top-ranked EHO. This is attributable to F14's continuous yet highly irregular shape. Likewise, for F16, MLEHO secures the optimal and mean values, and its standard deviation is only higher than that of DE. For F20, MLEHO ranks second in performance, with its mean and standard deviation inferior to DE's. This is due to the function's global minimum being surrounded by numerous local minima radiating outward in concentric wave-like patterns. To validate and illustrate the performance of MLEHO more intuitively, Figure 2 and Figure 3 display the corresponding convergence curves and boxplots, respectively. For most functions, MLEHO exhibits faster convergence and more stable performance than the other algorithms, reflecting a good balance between exploration and exploitation. However, a notable exception exists for F8 and F20: MLEHO's convergence curves on these two functions are slightly flatter than DE's, as discussed earlier. Despite these minor shortcomings, MLEHO's overall performance still outperforms the comparison algorithms, indicating that it remains a well-rounded and robust optimization algorithm.
Benchmark functions effectively reflect an algorithm’s capability in solving optimization problems. However, these are primarily composed of mathematical test functions and have certain limitations. Some evaluation results exhibit relatively small magnitudes and lack real application contexts. These small magnitudes may fall below required precision levels in practical applications. To address these limitations, we further evaluated the proposed method on public classification datasets and the Ti6Al4V process classification problem to validate its effectiveness.

4.4. Classification Benchmark Dataset Test

In this part, the performance of the proposed MLEHO-BPC is investigated. In addition, the comparative results with EHO-BPC, PSO-BPC, DE-BPC, BA-BPC, ADAM-BPC, NADAM-BPC and Look-ahead-BPC are provided. The performance test compares seven classic benchmarks for data classification prediction problems. Table 7 provides a comprehensive overview of seven classification datasets, including their name, sample sizes, feature counts, number of categories, and sample distribution.
In all experiments, a stratified seven-fold cross-validation method is used to evaluate the classifier’s performance measures more accurately. This means that 1/7 of the overall dataset is randomly considered as the test set, and the rest is the training set. Unlike standard cross-validation, the stratified method randomly divides every class into seven equal-sized sets, so that every fold is able to better represent the overall data. The classifier is run seven times based on this method, with a different set as the test set each time. The test evaluation includes the maximum, mean and standard deviation of the prediction accuracy for both training and test datasets.
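A minimal sketch of this protocol using scikit-learn's StratifiedKFold is given below; the train_and_score callback standing in for a full MLEHO-BPC run is a placeholder.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def run_stratified_cv(X, y, train_and_score, n_splits=7, seed=0):
    """Stratified k-fold evaluation; returns the Max/Mean/Std accuracy reported."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    accs = []
    for train_idx, test_idx in skf.split(X, y):   # each class split evenly
        accs.append(train_and_score(X[train_idx], y[train_idx],
                                    X[test_idx], y[test_idx]))
    return max(accs), np.mean(accs), np.std(accs)
```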
Table 8 and Table 9 give the experimental results of MLEHO-BPC and comparative classifiers for the data classification prediction problem on the training and test point sets, respectively.
As evidenced in Table 8, MLEHO-BPC achieves superior accuracy metrics across all training datasets compared to other classifiers, including the state-of-the-art optimizers (Adam, Nadam, and Lookahead). The Friedman test yields a test statistic (30.55) significantly exceeding the critical value (14.067), indicating substantial algorithmic differences. The post hoc Nemenyi test results reveal MLEHO-BPC achieves an average rank of 1.0, demonstrating superior performance and statistically distinct differences from all other algorithms. Specifically, DE (average rank of 3.43) shows significant divergence from NADAM (average rank of 6.71). No significant differences are observed between other algorithm pairs. While these advanced methods occasionally achieve competitive or even singularly higher Max accuracy on certain datasets, such as Thyroid and Jain, MLEHO-BPC demonstrates significantly better and more consistent performance overall. This is reflected in its highest Mean accuracy and, crucially, the lowest Std across the vast majority of datasets, indicating superior robustness and stability.
This advantage remains pronounced in the testing-set results presented in Table 9, with the exception of the Seeds and WBC datasets. The Friedman test yields a test statistic (21.27) significantly exceeding the critical value (14.067), indicating substantial algorithmic differences. The post hoc Nemenyi test results reveal that MLEHO-BPC achieves the lowest average rank of 1.2, demonstrating superior performance and statistically significant divergence from BA, EHO, and ADAM. For these datasets, while the advanced optimizers can sometimes reach high peak performances, they often suffer from significantly higher standard deviations and lower mean values, particularly Adam-BPC. MLEHO-BPC exhibits marginally lower standard deviations than DE-BPC, by approximately 2.80 × 10−2 and 2.00 × 10−4, respectively. Collectively, the experimental results confirm that MLEHO effectively optimizes BPNN weights and thresholds, delivering significantly enhanced classification accuracy and remarkable robustness. It consistently outperforms both classic swarm intelligence algorithms and state-of-the-art optimizers.

4.5. Ti6Al4V Process Classification Problem

In the selective laser melting (SLM) process for manufacturing Ti6Al4V components, the formation quality of single-track melt pools critically determines dimensional consistency and mechanical properties. Abnormal morphological variations in melt pools may induce defects such as cracks and internal porosity, severely compromising structural integrity. Therefore, accurate classification of Ti6Al4V single-track morphological features is essential to ensure SLM process stability and product quality. This section introduces an SLM process classification application for Ti6Al4V single tracks.
The dataset is constructed through orthogonal numerical simulation experiments, in which laser power (50–300 W) and scanning speed (0.35–2.10 m/s) are sampled at equal intervals within their respective ranges through multiple repeated trials. After image acquisition, abnormal data are filtered out based on morphological integrity and physical plausibility criteria, resulting in 441 valid samples. Based on empirical observation of melt pool morphology, these samples are categorized into three categories. A total of 273 "normal" samples exhibit continuous and uniform melt pools with smooth edges and no visible defects, 122 "keyhole" samples show melt pools with significant width and depth fluctuations due to excessive energy input, and 46 "no-continuous" samples present fractured or segmented melt pools resulting from insufficient energy input. Geometric feature extraction was performed using ImageJ v1.54 to quantify features of the melt pool.
To achieve accurate classification, seven key features are extracted through the aforementioned image measurement methods. Keyhole depth, melt pool depth, and effective width describe the geometric characteristics and are manually annotated. Height ratio and linear energy density reflect the energy input and fusion state: the height ratio is calculated as the powder layer height divided by the melt pool depth, while the linear energy density is defined as the laser power divided by the scanning speed. The standard deviation and coefficient of variation indicate process variability: the standard deviation is computed from the uniformly annotated melt track width, and the coefficient of variation is obtained by dividing the standard deviation by the average melt pool width. Together, these features characterize melt pool geometry, energy distribution, and process stability, providing comprehensive inputs for the classification model. Table 10 details the number of samples, features, and categories, as well as the sample distribution per class.
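A small sketch of the derived features, following the definitions above; the argument names are hypothetical, and consistent units are assumed (W for power, m/s for speed, and a single length unit for depths and widths).

```python
import numpy as np

def derived_features(power_w, speed_m_s, layer_height, pool_depth, track_widths):
    """Compute the four derived features described in the text."""
    height_ratio = layer_height / pool_depth      # powder layer height / melt pool depth
    energy_density = power_w / speed_m_s          # linear energy density (J/m)
    std_width = np.std(track_widths)              # variability of the melt track width
    cv_width = std_width / np.mean(track_widths)  # coefficient of variation
    return height_ratio, energy_density, std_width, cv_width
```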
As illustrated in Figure 4 and Figure 5, the accuracy convergence curves of the training and testing sets clearly demonstrate that MLEHO-BPC achieves both faster convergence and higher final accuracy compared to EHO-BPC, PSO-BPC, DE-BPC, BA-BPC, Adam-BPC, Nadam-BPC, and Look-ahead-BPC. This performance advantage is further quantitatively confirmed by the results summarized in Table 11 and Table 12, where MLEHO-BPC achieves the best maximum accuracy, mean accuracy, and standard deviation metrics across both the training and testing datasets. On the training set, MLEHO-BPC reaches a maximum accuracy of 99.47%, surpassing the other algorithmic classifiers by 1.85–5.03%, and achieves a mean accuracy of 98.26%, outperforming them by 2.04–10.69%. Most notably, its strikingly low standard deviation (8.20 × 10−3) confirms exceptional stability by preventing overfitting during training. For the testing set, MLEHO-BPC maintains robust performance with a maximum accuracy of 98.41%, surpassing the other algorithmic classifiers by 0.00–6.35%, and a mean accuracy of 96.60%, exceeding them by 0.45–10.89%. Critically, its standard deviation (2.32 × 10−2) is the smallest, confirming strong generalization capability to unseen data. To validate MLEHO-BPC's superiority in imbalanced classification tasks, Table 13 presents the precision (P), recall (R), and macro-F1 scores for each classifier. The data in Table 13 are expressed as mean ± standard deviation. In Classes 1 and 2, MLEHO-BPC consistently outperformed all competitors across the P, R, and macro-F1 metrics. For Class 3, the proposed classifier improved P and R by 2.7% and 9.4% over the suboptimal Look-ahead-BPC variant. MLEHO-BPC achieves a macro-F1 of 0.959 ± 0.02, significantly outperforming Look-ahead-BPC (0.930 ± 0.049) with the smallest standard deviation. Figure 6 details the confusion matrix of MLEHO-BPC. It is worth noting that although Adam-BPC, Nadam-BPC, and Look-ahead-BPC perform close to MLEHO-BPC on certain metrics, such as Nadam-BPC matching MLEHO-BPC in standard deviation on the training set, MLEHO-BPC maintains an overall advantage, particularly in generalization capability and stability.
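The per-class precision/recall and macro-F1 of the kind reported in Table 13, along with a confusion matrix as in Figure 6, can be computed from predictions with scikit-learn as sketched below; y_true and y_pred are placeholders for a classifier's outputs on the test fold.

```python
from sklearn.metrics import precision_recall_fscore_support, confusion_matrix

def class_metrics(y_true, y_pred):
    """Per-class P/R/F1 plus macro-F1 (unweighted mean over the three classes)."""
    p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average=None)
    macro_f1 = f1.mean()
    return p, r, macro_f1, confusion_matrix(y_true, y_pred)
```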
Collectively, these results validate that MLEHO-BPC converges faster and more stably than the comparative classifiers while achieving higher classification accuracy for Ti6Al4V process classification. This not only conclusively confirms its efficacy and robustness as a viable tool for process data classification but also, with its lightweight architecture and superior performance, provides an efficient and reliable solution for deploying algorithms on small edge computing nodes and enabling rapid quality classification control in digital manufacturing ecosystems.

4.6. Discussion and Limitations

This study numerically simulates the selective laser melting process of Ti6Al4V alloy using the discrete element method (DEM) [55] and finite volume method (FVM) [56], constructing a dataset through manual annotation to analyze correlations between process parameters and formed morphology. While the simulation rigorously incorporates physical laws of manufacturing processes and has been validated by prior studies [57,58], the model inevitably simplifies real-world complexities, potentially introducing discrepancies between simulated and actual outcomes. Additionally, although classification rules are established between different labels, manual labeling remains susceptible to subjective interpretation biases. To address these limitations, future work plans to integrate physical experiments with the simulation framework to mitigate deviations and further develop standardized morphology judgment rules for more accurate and objective analysis.

5. Conclusions

The current research addresses the bottlenecks of traditional BPNNs, namely, susceptibility to falling into local optima and slow convergence. To tackle data classification challenges, a BP classifier based on a Multi-Learning Elephant Herding Optimization (MLEHO) is proposed. The proposed algorithm establishes a triple learning framework. Firstly, a collective learning stage is defined by incorporating two different adaptive operators into the original algorithm, aimed at strengthening its efficiency in finding global optima. Secondly, a group learning stage is designed, integrating exemplar, deskmate, and random learning methods to enhance the convergence rate and local search performance. Finally, a tutorship learning stage, guided by fitness value discrimination, is devised to enhance the algorithm's ability to escape local optima. Benchmark function tests demonstrate MLEHO's superiority in convergence speed and stability over other algorithms. Furthermore, MLEHO successfully replaces conventional training methods by optimizing BPNN weights and thresholds. To systematically evaluate the accuracy and generalization capability of MLEHO-BPC, seven classic benchmark classification datasets are tested. Additionally, in the Ti6Al4V process classification problem, MLEHO-BPC achieved a top accuracy of 98.41%, with all other performance metrics significantly outperforming other algorithmic classifiers. These results fully validate its effectiveness as a robust data classification tool and contribute to enhancing manufacturing quality control, thereby reducing defects and waste in production. Further analysis revealed a positive correlation between neural network parameter dimensionality and convergence difficulty. However, this study still has certain limitations in terms of scalability and adaptability, and its applicability across different problem domains remains unknown; these issues need to be addressed in future work. Future research will therefore focus on continuous algorithmic improvements to tackle high-dimensional optimization problems, strengthening the framework's capability on high-dimensional parameter spaces, its scalability to larger datasets, and its generalizability. Additionally, we plan to expand the research to include more alloys and processes, enhancing the algorithm's adaptability to diverse application scenarios.

Author Contributions

Conceptualization, H.C. and X.L.; methodology, M.H.; software, R.N.; validation, S.X.; formal analysis, X.L.; investigation, S.X.; resources, H.C.; data curation, R.N.; writing—original draft preparation, S.X.; writing—review and editing, S.X.; visualization, Z.G.; supervision, H.C. and X.L.; project administration, H.C.; funding acquisition, H.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kotsiantis, S.; Zaharakis, I.; Pintelas, P. Machine learning: A review of classification and combining techniques. Artif. Intell. Rev. 2006, 26, 159–190. [Google Scholar] [CrossRef]
  2. Cunningham, P.; Delany, S. k-Nearest Neighbour Classifiers—A Tutorial, ACM. Comput. Surv. 2021, 54, 128. [Google Scholar] [CrossRef]
  3. Myles, A.; Feudale, R.; Liu, Y.; Woody, N.; Brown, S. An introduction to decision tree modeling. J. Chemom. 2004, 18, 275–285. [Google Scholar] [CrossRef]
  4. Wickramasinghe, I.; Kalutarage, H. Naive Bayes: Applications, variations and vulnerabilities: A review of literature with code snippets for implementation. Soft Comput. 2020, 25, 2277–2293. [Google Scholar] [CrossRef]
  5. Dreiseitl, S.; Ohno-Machado, L. Logistic regression and artificial neural network classification models: A methodology review. J. Biomed. Inform. 2002, 35, 352–359. [Google Scholar] [CrossRef]
  6. Sun, J. Learning algorithm and hidden node selection scheme for local coupled feedforward neural network classifier. Neurocomputing 2012, 79, 158–163. [Google Scholar] [CrossRef]
  7. Zhang, G. Neural networks for classification: A survey. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2000, 30, 451–462. [Google Scholar] [CrossRef]
  8. Abdolrasol, M.; Hussain, S.; Ustun, T.; Sarker, M.; Hannan, M.; Mohamed, R.; Ali, J.; Mekhilef, S.; Milad, A. Artificial Neural Networks Based Optimization Techniques: A Review. Electronics 2021, 10, 2689. [Google Scholar] [CrossRef]
  9. Kaveh, M.; Mesgari, M. Application of Meta-Heuristic Algorithms for Training Neural Networks and Deep Learning Architectures: A Comprehensive Review. Neural Process. Lett. 2023, 55, 4519–4622. [Google Scholar] [CrossRef]
  10. Jin, Y.; Wu, H.; Zheng, J.; Zhang, J.; Liu, Z. Power Transformer Fault Diagnosis Based on Improved BP Neural Network. Electronics 2023, 12, 3526. [Google Scholar] [CrossRef]
  11. Cui, K.; Jing, X. Research on prediction model of geotechnical parameters based on BP neural network. Neural Comput. Appl. 2019, 31, 8205–8215. [Google Scholar] [CrossRef]
  12. Song, S.; Xiong, X.; Wu, X.; Xue, Z. Modeling the SOFC by BP neural network algorithm. Int. J. Hydrogen Energy 2021, 46, 20065–20077. [Google Scholar] [CrossRef]
  13. Wu, B.; Han, S.; Xiao, J.; Hu, X.; Fan, J. Error compensation based on BP neural network for airborne laser ranging. Optik 2016, 127, 4083–4088. [Google Scholar] [CrossRef]
  14. Zhang, W.; Kumar, M.; Liu, J. Multi-parameter online measurement IoT system based on BP neural network algorithm. Neural Comput. Appl. 2019, 31, 8147–8155. [Google Scholar] [CrossRef]
  15. Zheng, Y.; Wang, P.; Ma, J.; Zhang, H. Remote sensing image classification based on BP neural network model. Trans. Nonferrous Met. Soc. China 2005, 15, 232–235. [Google Scholar]
  16. Jia, W.; Zhao, D.; Shen, T.; Ding, S.; Zhao, Y.; Hu, C. An optimized classification algorithm by BP neural network based on PLS and HCA. Appl. Intell. 2015, 43, 176–191. [Google Scholar] [CrossRef]
  17. Gao, B.; Zhao, P.; Lu, Y.; Fan, Y.; Zhou, L.; Qian, J.; Liu, L.; Zhao, S.; Kong, Z. Study on Recognition and Classification of Blood Fluorescence Spectrum with BP Neural Network. Spectrosc. Spectr. Anal. 2018, 38, 3136–3143. [Google Scholar]
  18. Zhao, X.; Lai, K.; Dai, D. An improved BP algorithm and its application in classification of surface defects of steel plate. J. Iron Steel Res. Int. 2007, 14, 52–55. [Google Scholar] [CrossRef]
  19. Li, S.; Shen, Y.; Lin, P.; Xie, J.; Tian, S.; Lv, Y.; Ma, W. Classification method of surrounding rock of plateau tunnel based on BP neural network. Front. Earth Sci. 2024, 11, 1283520. [Google Scholar] [CrossRef]
  20. Dong, S.; Zhou, D.; Zhou, W.; Ding, W.; Gong, J. Research on Network Traffic Identification Based on Improved BP Neural Network. Appl. Math. Inf. Sci. 2013, 7, 389–398. [Google Scholar] [CrossRef]
  21. Ding, S.; Su, C.; Yu, J. An optimizing BP neural network algorithm based on genetic algorithm. Artif. Intell. Rev. 2011, 36, 153–162. [Google Scholar] [CrossRef]
  22. Xu, S.; Chen, X.; Fu, Y.; Xu, H.; Hong, K. Research on Weigh-in-Motion Algorithm of Vehicles Based on BSO-BP. Sensors 2022, 22, 2109. [Google Scholar] [CrossRef] [PubMed]
  23. Jia, D.; Zhang, C.; Lv, D. Evaluation of road condition based on BA-BP algorithm. J. Intell. Fuzzy Syst. 2021, 40, 331–348. [Google Scholar] [CrossRef]
  24. Bai, Y.; Luo, M.; Pang, F. An Algorithm for Solving Robot Inverse Kinematics Based on FOA Optimized BP Neural Network. Appl. Sci. 2021, 11, 7129. [Google Scholar] [CrossRef]
  25. Zheng, Y.; Lv, X.; Qian, L.; Liu, X. An Optimal BP Neural Network Track Prediction Method Based on a GA-ACO Hybrid Algorithm. J. Mar. Sci. Eng. 2022, 10, 1399. [Google Scholar] [CrossRef]
  26. Yu, L.; Xie, L.; Liu, C.; Yu, S.; Guo, Y.; Yang, K. Optimization of BP neural network model by chaotic krill herd algorithm. Alex. Eng. J. 2022, 61, 9769–9777. [Google Scholar] [CrossRef]
  27. Wang, R.X.; Cheung, C.F.; Wang, C.J.; Cheng, M.N. Deep learning characterization of surface defects in the selective laser melting process. Comput. Ind. 2022, 140, 103662. [Google Scholar] [CrossRef]
  28. Luo, S.Y.; Ma, X.Q.; Xu, J.; Li, M.L.; Cao, L.C. Deep Learning Based Monitoring of Spatter Behavior by the Acoustic Signal in Selective Laser Melting. Sensors 2021, 21, 7179. [Google Scholar] [CrossRef]
  29. Alamri, N.M.H.; Packianather, M.; Bigot, S. Optimizing the Parameters of Long Short-Term Memory Networks Using the Bees Algorithm. Appl. Sci. 2023, 13, 2536. [Google Scholar] [CrossRef]
  30. Chung, P.H.; Zhuang, J.R.; Pan, C.H. Evaluation and prediction of thermal defects in SLM-manufactured tibial components using FEM-based deep learning and statistic methods. Int. J. Adv. Manuf. Technol. 2024, 134, 691–709. [Google Scholar] [CrossRef]
  31. Eren, O.; Yüksel, N.; Börklü, H.R.; Sezer, H.K.; Canyurt, O.E. Deep learning-enabled design for tailored mechanical properties of SLM-manufactured metallic lattice structures. Eng. Appl. Artif. Intell. 2024, 130, 107685. [Google Scholar] [CrossRef]
  32. Khoei, T.T.; Slimane, H.O.; Kaabouch, N. Deep learning: Systematic review, models, challenges, and research directions. Neural Comput. Appl. 2023, 35, 23103–23124. [Google Scholar] [CrossRef]
  33. Qin, Y.M.; Tu, Y.H.; Li, T.; Ni, Y.; Wang, R.F.; Wang, H.H. Deep Learning for Sustainable Agriculture: A Systematic Review on Applications in Lettuce Cultivation. Sustainability 2025, 17, 3190. [Google Scholar] [CrossRef]
  34. Lee, M.F.R. A Review on Intelligent Control Theory and Applications in Process Optimization and Smart Manufacturing. Processes 2023, 11, 3171. [Google Scholar] [CrossRef]
  35. Li, J.C.; Cao, L.; Xu, J.; Wang, S.; Zhou, Q. In situ porosity intelligent classification of selective laser melting based on coaxial monitoring and image processing. Measurement 2022, 187, 110232. [Google Scholar] [CrossRef]
  36. Xia, Q.F.; Li, Y.; Sun, N.; Song, Z.; Zhu, K.; Guan, J.; Li, P.; Tang, S.; Han, J. A Multi-Objective Genetic Algorithm-Based Predictive Model and Parameter Optimization for Forming Quality of SLM Aluminum Anodes. Crystals 2024, 14, 608. [Google Scholar] [CrossRef]
  37. Praveenkumar, V.; Jatti, V.S.; Saiyathibrahim, A.; Praveen Kumar, D.; Murali Krishnan, R.; Vinaykumar, S.J.; Santhosh, A.J. Optimizing laser powder bed fusion parameters for enhanced hardness of Ti6Al4V alloys: A comparative analysis of metaheuristic algorithms. AIP Adv. 2025, 15, 045024. [Google Scholar] [CrossRef]
  38. Vaidyaa, P.; John, J.J.; Puviyarasan, M.; Prabhu, T.R.; Prasad, N.E. Wire EDM Parameter Optimization of AlSi10Mg Alloy Processed by Selective Laser Melting. Trans. Indian Inst. Met. 2021, 74, 2869–2885. [Google Scholar] [CrossRef]
  39. Costa, A.; Palmeri, D.; Pollara, G.; Fichera, S. Multi-objective process parameters optimization of Ti-6Al-4V LPBF parts through a hybrid prediction–optimization approach. Int. J. Adv. Manuf. Technol. 2025, 138, 1739–1751. [Google Scholar] [CrossRef]
  40. Chaudhry, S.; Soulainmani, A. A Comparative Study of Machine Learning Methods for Computational Modeling of the Selective Laser Melting Additive Manufacturing Process. Appl. Sci. 2022, 12, 2324. [Google Scholar] [CrossRef]
  41. Barile, C.; Casavola, C.; Pappalettera, G.; Kannan, V.P.; Mpoyi, D.K. Acoustic Emission and Deep Learning for the Classification of the Mechanical Behavior of AlSi10Mg AM-SLM Specimens. Appl. Sci. 2023, 13, 189. [Google Scholar] [CrossRef]
  42. Barile, C.; Casavola, C.; Pappalettera, G.; Kannan, V.P. Damage Progress Classification in AlSi10Mg SLM Specimens by Convolutional Neural Network and k-Fold Cross Validation. Materials 2022, 15, 4428. [Google Scholar] [CrossRef]
  43. Li, J.; Lei, H.; Alavi, A.; Wang, G. Elephant Herding Optimization: Variants, Hybrids, and Applications. Mathematics 2020, 8, 1415. [Google Scholar] [CrossRef]
  44. Magoulas, G.; Vrahatis, M. Adaptive algorithms for neural network supervised learning: A deterministic optimization approach. Int. J. Bifurc. Chaos 2006, 16, 1929–1950. [Google Scholar] [CrossRef]
  45. Wang, G.; Deb, S.; Gao, X.; Coelho, L. A new metaheuristic optimisation algorithm motivated by elephant herding behaviour. Int. J. Bio-Inspir. Comput. 2016, 8, 394–409. [Google Scholar] [CrossRef]
  46. Ye, S. RMB exchange rate forecast approach based on BP neural network. Phys. Procedia 2012, 33, 287–293. [Google Scholar] [CrossRef]
  47. Yuan, B.; Gallagher, M. Experimental results for the special session on real-parameter optimization at CEC 2005: A simple, continuous EDA. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; Volume 2, pp. 1792–1799. [Google Scholar] [CrossRef]
  48. LaTorre, A.; Peña, J.-M. A comparison of three large-scale global optimizers on the CEC 2017 single objective real parameter numerical optimization benchmark. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation, Donostia, Spain, 5–8 June 2017; pp. 1063–1070. [Google Scholar] [CrossRef]
  49. Silagadze, Z.K. Finding Two-Dimensional Peaks. Phys. Part. Nucl. Lett. 2007, 4, 73–80. [Google Scholar] [CrossRef]
  50. Cragg, E.E.; Levy, A.V. Study on a Supermemory Gradient Method for the Minimization of Functions. J. Optim. Theory Appl. 1969, 4, 191–205. [Google Scholar] [CrossRef]
  51. Jain, M.; Saihjpal, V.; Singh, N.; Singh, S. An Overview of Variants and Advancements of PSO Algorithm. Appl. Sci. 2022, 12, 8392. [Google Scholar] [CrossRef]
  52. Bilal; Pant, M.; Zaheer, H.; Garcia-Hernandez, L.; Abraham, A. Differential Evolution: A review of more than two decades of research. Eng. Appl. Artif. Intell. 2020, 90, 103479. [Google Scholar] [CrossRef]
  53. Yang, X.; He, X. Bat algorithm: Literature review and applications. Int. J. Bio-Inspir. Comput. 2013, 5, 141–149. [Google Scholar] [CrossRef]
  54. Dheeru, D.; Graff, C. UCI Machine Learning Repository; School of Information and Computer Science, University of California: Irvine, CA, USA, 2019; Available online: http://archive.ics.uci.edu/ml (accessed on 10 May 2024).
  55. Luberto, L.; Luchini, D.; Payrebrune, K.M. A novel mesoscale transitional approach for capturing localized effects in laser powder bed fusion simulations. Powder Technol. 2024, 447, 120194. [Google Scholar] [CrossRef]
  56. Chen, X.; Mu, W.; Xu, X.; Liu, W.; Huang, L.; Li, H. Numerical analysis of double track formation for selective laser melting of 316L stainless steel. Appl. Phys. A 2021, 127, 586. [Google Scholar] [CrossRef]
  57. Hu, H.W.; Ding, X.P.; Wang, L.Z. Numerical analysis of heat transfer during multi-layer selective laser melting of AlSi10Mg. Optik 2016, 127, 8883–8891. [Google Scholar] [CrossRef]
  58. Le, T.; Lo, Y.; Tran, H. Multi-scale modeling of selective electron beam melting of Ti6Al4V titanium alloy. Int. J. Adv. Manuf. Technol. 2019, 105, 545–563. [Google Scholar] [CrossRef]
Figure 1. Structure diagram of BP neural network.
Figure 2. Convergence graphs of MLEHO, EHO, PSO, DE and BA.
Figure 3. Boxplot of MLEHO, EHO, PSO, DE and BA.
Figure 4. Training accuracy (a) convergence curves and (b) boxplots of MLEHO and comparative classifiers.
Figure 4. Training accuracy (a) convergence curves and (b) boxplots of MLEHO and comparative classifiers.
Applsci 16 01746 g004aApplsci 16 01746 g004b
Figure 5. Testing accuracy (a) convergence curves and (b) boxplots of MLEHO and comparative classifiers.
Figure 5. Testing accuracy (a) convergence curves and (b) boxplots of MLEHO and comparative classifiers.
Applsci 16 01746 g005aApplsci 16 01746 g005b
Figure 6. The confusion matrix of MLEHO-BPC.
Figure 6. The confusion matrix of MLEHO-BPC.
Applsci 16 01746 g006
Table 1. Expression of F1–F7 benchmark functions.

Function | Expression | Dim | Range | Fmin
F1: Sphere Function | $F_1 = \sum_{i=1}^{n} x_i^2$ | 30 | [−100, 100] | 0
F2: Schwefel's Problem 2.22 | $F_2 = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | 30 | [−10, 10] | 0
F3: Quartic Function, i.e., Noise | $F_3 = \sum_{i=1}^{n} i x_i^4 + \text{random}[0, 1)$ | 30 | [−1.28, 1.28] | 0
F4: Bent Cigar Function | $F_4 = x_1^2 + 10^6 \sum_{i=2}^{n} x_i^2$ | 30 | [−100, 100] | 0
F5: Schwefel's Problem 1.2 | $F_5 = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | 30 | [−100, 100] | 0
F6: Zakharov Function | $F_6 = \sum_{i=1}^{n} x_i^2 + \left( \sum_{i=1}^{n} 0.5 x_i \right)^2 + \left( \sum_{i=1}^{n} 0.5 x_i \right)^4$ | 30 | [−100, 100] | 0
F7: High Conditioned Elliptic Function | $F_7 = \sum_{i=1}^{n} (10^6)^{\frac{i-1}{n-1}} x_i^2$ | 30 | [−100, 100] | 0
Table 2. Expression of F8–F13 benchmark functions.

Function | Expression | Dim | Range | Fmin
F8: Generalized Schwefel's Problem 2.26 | $F_8 = -\sum_{i=1}^{n} x_i \sin\left(\sqrt{|x_i|}\right)$ | 30 | [−500, 500] | −12,569.5
F9: Generalized Rastrigin's Function | $F_9 = \sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right]$ | 30 | [−5.12, 5.12] | 0
F10: Ackley's Function | $F_{10} = -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 + e$ | 30 | [−32, 32] | 0
F11: Generalized Griewank's Function | $F_{11} = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | 30 | [−600, 600] | 0
F12: Schaffer's F6 Function | $F_{12} = g(x_1, x_2) + g(x_2, x_3) + \cdots + g(x_{n-1}, x_n) + g(x_n, x_1)$, where $g(x, y) = 0.5 + \frac{\sin^2\left(\sqrt{x^2 + y^2}\right) - 0.5}{\left[1 + 0.001(x^2 + y^2)\right]^2}$ | 30 | [−100, 100] | 0
F13: Schwefel Function | $F_{13} = 418.9829 \times n - \sum_{i=1}^{n} x_i \sin\left(\sqrt{|x_i|}\right)$ | 30 | [−500, 500] | 0
Table 3. Expression of F14–F20 benchmark functions.

Function | Expression | Dim | Range | Fmin
F14: Bukin's F6 Function | $F_{14} = 100\sqrt{\left| x_2 - 0.01 x_1^2 \right|} + 0.01\left| x_1 + 10 \right|$ | 2 | [−15, 3] | 0
F15: Shekel's Foxholes Function | $F_{15} = \left[ \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right]^{-1}$ | 2 | [−65.53, 65.53] | 1
F16: Kowalik's Function | $F_{16} = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$ | 4 | [−5, 5] | 3.07 × 10^−4
F17: Six-Hump Camel-Back Function | $F_{17} = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$ | 2 | [−5, 5] | −1.031
F18: Branin Function | $F_{18} = \left( x_2 - \frac{5.1}{4\pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos x_1 + 10$ | 2 | [−5, 15] | 0.398
F19: Hartman's Family | $F_{19} = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \right)$ | 3 | [1, 3] | −3.86
F20: Levy's F13 Function | $F_{20} = \sin^2(3\pi x_1) + (x_1 - 1)^2\left[1 + \sin^2(3\pi x_2)\right] + (x_2 - 1)^2\left[1 + \sin^2(2\pi x_2)\right]$ | 2 | [−10, 10] | 0
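To make the benchmark definitions above concrete, the following is a minimal sketch in Python/NumPy of three representative functions from Tables 1–3 (F1, F9, and F10). It is an illustrative reimplementation written for this presentation, not the study's own code; the evaluation point is arbitrary.

```python
import numpy as np

# Illustrative reimplementation of three benchmark functions from Tables 1-3.
# Dimensions and search ranges follow the tables; this is not the authors' code.

def f1_sphere(x: np.ndarray) -> float:
    """F1: Sphere, x in [-100, 100]^30, global minimum 0 at x = 0."""
    return float(np.sum(x ** 2))

def f9_rastrigin(x: np.ndarray) -> float:
    """F9: Generalized Rastrigin, x in [-5.12, 5.12]^30, minimum 0 at x = 0."""
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def f10_ackley(x: np.ndarray) -> float:
    """F10: Ackley, x in [-32, 32]^30, minimum 0 at x = 0."""
    n = x.size
    return float(
        -20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
        - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n)
        + 20.0 + np.e
    )

if __name__ == "__main__":
    x0 = np.zeros(30)
    # All three functions attain their listed Fmin of 0 at the origin.
    print(f1_sphere(x0), f9_rastrigin(x0), f10_ackley(x0))
```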
Table 4. Algorithm parameter settings.

Algorithm | Settings
EHO | α = 0.5, β = 0.1
PSO | c1 = 2, c2 = 2, w = 0.8
DE | F0 = 0.5, CR = 0.9
BA | A = 0.7, r = 0.5, Qmin = 0, Qmax = 2
LEHO | β = 1.5
Table 5. The results of the ablation experiments across different learning strategy stages (C = collective learning, G = group learning, T = tutorship learning).

C | G | T | F3 (Best / Mean / Std) | F12 (Best / Mean / Std) | F15 (Best / Mean / Std)
– | – | – | 2.70 × 10^−5 / 4.68 × 10^−4 / 5.14 × 10^−4 | 8.09 / 1.13 × 10^1 / 1.14 | 9.98 × 10^−1 / 5.29 / 4.00
✓ | – | – | 2.40 × 10^−5 / 2.53 × 10^−4 / 3.27 × 10^−4 | 0.00 / 8.09 / 4.62 | 9.98 × 10^−1 / 9.98 × 10^−1 / 1.45 × 10^−6
– | ✓ | – | 2.52 × 10^−5 / 7.92 × 10^−4 / 1.14 × 10^−3 | 6.97 × 10^−1 / 6.53 / 2.61 | 9.98 × 10^−1 / 9.98 × 10^−1 / 3.12 × 10^−11
– | – | ✓ | 1.80 × 10^−5 / 3.24 × 10^−4 / 2.81 × 10^−4 | 0.00 / 1.13 × 10^1 / 3.95 | 1.01 / 9.95 / 5.47
✓ | ✓ | – | 2.18 × 10^−5 / 2.59 × 10^−4 / 2.51 × 10^−4 | 0.00 / 1.51 / 3.10 | 9.98 × 10^−1 / 9.98 × 10^−1 / 1.91 × 10^−15
✓ | – | ✓ | 1.67 × 10^−5 / 3.06 × 10^−4 / 4.29 × 10^−4 | 0.00 / 7.62 × 10^−1 / 2.35 | 9.98 × 10^−1 / 9.98 × 10^−1 / 8.82 × 10^−15
– | ✓ | ✓ | 1.34 × 10^−5 / 3.09 × 10^−4 / 3.91 × 10^−4 | 0.00 / 7.70 / 5.40 | 9.98 × 10^−1 / 1.10 × 10^−1 / 3.06 × 10^−5
✓ | ✓ | ✓ | 1.00 × 10^−5 / 1.47 × 10^−4 / 2.28 × 10^−4 | 0.00 / 0.00 / 0.00 | 9.98 × 10^−1 / 9.98 × 10^−1 / 1.14 × 10^−15
Table 6. Comparative experimental results of MLEHO, EHO, PSO, DE, and BA on benchmark functions.

Function | Values | MLEHO | EHO | PSO | DE | BA
F1 | Best | 0.0000 | 1.6970 × 10^−110 | 2.1265 | 1.1197 × 10^−2 | 5.6537
F1 | Mean | 1.5500 × 10^−308 | 9.5496 × 10^−108 | 4.4223 | 2.2796 × 10^−2 | 6.9105
F1 | Std | 0.0000 | 1.9269 × 10^−107 | 1.5954 | 6.2757 × 10^−3 | 6.6097 × 10^−1
F2 | Best | 7.2322 × 10^−191 | 7.9017 × 10^−57 | 1.4038 | 1.7371 × 10^−2 | 1.1174 × 10^1
F2 | Mean | 2.1053 × 10^−164 | 2.4313 × 10^−55 | 3.3139 | 2.5421 × 10^−2 | 1.3362 × 10^4
F2 | Std | 0.0000 | 2.9481 × 10^−55 | 1.7906 | 4.4395 × 10^−3 | 5.0615 × 10^4
F3 | Best | 1.0012 × 10^−5 | 2.7023 × 10^−5 | 7.6850 × 10^−3 | 5.7711 × 10^−2 | 2.5079 × 10^1
F3 | Mean | 1.4740 × 10^−4 | 4.6831 × 10^−4 | 3.8607 × 10^−2 | 8.2219 × 10^−2 | 3.8788 × 10^1
F3 | Std | 2.2818 × 10^−4 | 5.1406 × 10^−4 | 1.7046 × 10^−2 | 1.6593 × 10^−2 | 5.7377
F4 | Best | 0.0000 | 4.6161 × 10^−104 | 1.8895 × 10^7 | 7.5477 × 10^4 | 5.2843 × 10^7
F4 | Mean | 1.4950 × 10^−314 | 1.9292 × 10^−101 | 4.0025 × 10^7 | 1.3437 × 10^5 | 6.0541 × 10^7
F4 | Std | 0.0000 | 4.6039 × 10^−101 | 1.6146 × 10^7 | 4.3267 × 10^4 | 4.7088 × 10^6
F5 | Best | 0.0000 | 3.8793 × 10^−111 | 1.2191 × 10^2 | 1.9270 × 10^4 | 3.3688 × 10^1
F5 | Mean | 7.5122 × 10^−314 | 1.9118 × 10^−108 | 2.9375 × 10^2 | 2.9996 × 10^4 | 2.2235 × 10^2
F5 | Std | 0.0000 | 3.7564 × 10^−108 | 1.0908 × 10^2 | 5.0858 × 10^3 | 1.8192 × 10^2
F6 | Best | 0.0000 | 9.1896 × 10^−111 | 8.5868 × 10^2 | 5.7756 × 10^4 | 1.3253 × 10^4
F6 | Mean | 1.1799 × 10^−305 | 9.6765 × 10^−108 | 2.1028 × 10^3 | 7.2750 × 10^4 | 2.0132 × 10^4
F6 | Std | 0.0000 | 2.1050 × 10^−107 | 1.8097 × 10^3 | 9.3546 × 10^3 | 4.5123 × 10^3
F7 | Best | 0.0000 | 7.3512 × 10^−106 | 1.7295 × 10^5 | 1.1470 × 10^2 | 1.3807 × 10^7
F7 | Mean | 1.3212 × 10^−310 | 1.4153 × 10^−103 | 1.8407 × 10^6 | 1.7622 × 10^2 | 2.6088 × 10^7
F7 | Std | 0.0000 | 2.7048 × 10^−103 | 1.5937 × 10^6 | 3.9419 × 10^1 | 9.2878 × 10^6
F8 | Best | −1.2568 × 10^4 | −3.9652 × 10^3 | −7.5649 × 10^3 | −1.2326 × 10^4 | −8.5592 × 10^3
F8 | Mean | −9.3362 × 10^3 | −2.7263 × 10^3 | −6.1176 × 10^3 | −1.1561 × 10^4 | −7.4457 × 10^3
F8 | Std | 1.1065 × 10^3 | 5.9078 × 10^2 | 1.0320 × 10^3 | 4.8156 × 10^2 | 6.6904 × 10^2
F9 | Best | 0.0000 | 8.0295 × 10^−1 | 1.8460 × 10^1 | 4.4456 × 10^1 | 2.7775 × 10^2
F9 | Mean | 0.0000 | 4.3609 | 3.7439 × 10^1 | 6.0722 × 10^1 | 3.1278 × 10^2
F9 | Std | 0.0000 | 2.9209 | 1.1436 × 10^1 | 7.8330 | 2.7631 × 10^1
F10 | Best | 4.4409 × 10^−16 | 4.4409 × 10^−16 | 3.1217 | 2.3130 × 10^−2 | 3.6032
F10 | Mean | 4.4409 × 10^−16 | 4.4409 × 10^−16 | 4.2420 | 4.8829 × 10^−2 | 1.4224 × 10^1
F10 | Std | 0.0000 | 0.0000 | 7.0286 × 10^−1 | 1.0573 × 10^−2 | 6.8082
F11 | Best | 0.0000 | 0.0000 | 9.1820 × 10^−1 | 2.1795 × 10^−2 | 1.5046 × 10^1
F11 | Mean | 0.0000 | 1.3897 × 10^−5 | 1.0330 | 8.8885 × 10^−2 | 6.3517 × 10^1
F11 | Std | 0.0000 | 4.2829 × 10^−5 | 4.3070 × 10^−2 | 2.9399 × 10^−2 | 2.0619 × 10^1
F12 | Best | 0.0000 | 8.0949 | 7.7181 | 6.8660 | 1.1210 × 10^1
F12 | Mean | 0.0000 | 1.1348 × 10^1 | 9.4776 | 7.7706 | 1.2980 × 10^1
F12 | Std | 0.0000 | 1.1439 | 8.5858 × 10^−1 | 3.5667 × 10^−1 | 5.1536 × 10^−1
F13 | Best | 3.8183 × 10^−4 | 3.8183 × 10^−4 | 3.9372 × 10^−1 | 1.3793 × 10^−3 | 6.4168 × 10^−1
F13 | Mean | 3.8183 × 10^−4 | 3.8183 × 10^−4 | 8.1780 × 10^−1 | 3.1684 × 10^−3 | 3.7756 × 10^1
F13 | Std | 0.0000 | 0.0000 | 2.7757 × 10^−1 | 1.3144 × 10^−3 | 1.1362 × 10^2
F14 | Best | 9.8874 × 10^−4 | 8.0352 × 10^−4 | 5.0017 × 10^−2 | 3.1518 × 10^−2 | 1.0317 × 10^−1
F14 | Mean | 1.5493 × 10^−2 | 2.0076 × 10^−2 | 1.6554 | 1.9502 × 10^−1 | 9.2393 × 10^−1
F14 | Std | 1.3731 × 10^−2 | 1.1683 × 10^−2 | 4.9338 | 1.3429 × 10^−1 | 4.3524 × 10^−1
F15 | Best | 9.9800 × 10^−1 | 9.9800 × 10^−1 | 9.9800 × 10^−1 | 9.9800 × 10^−1 | 9.9800 × 10^−1
F15 | Mean | 9.9800 × 10^−1 | 5.2878 | 3.9443 | 1.0477 | 5.4220
F15 | Std | 1.1425 × 10^−15 | 4.0014 | 3.5168 | 2.2227 × 10^−1 | 3.0687
F16 | Best | 3.0749 × 10^−4 | 7.9051 × 10^−4 | 3.0749 × 10^−4 | 5.5436 × 10^−4 | 5.9708 × 10^−4
F16 | Mean | 3.6995 × 10^−4 | 1.8525 × 10^−2 | 3.9914 × 10^−4 | 8.3907 × 10^−4 | 2.0127 × 10^−3
F16 | Std | 2.0292 × 10^−4 | 2.7709 × 10^−2 | 2.8182 × 10^−4 | 1.7939 × 10^−4 | 4.3417 × 10^−3
F17 | Best | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316
F17 | Mean | −1.0316 | −1.0314 | −1.0316 | −1.0316 | −1.0316
F17 | Std | 1.5282 × 10^−16 | 4.4321 × 10^−4 | 4.8948 × 10^−15 | 2.2204 × 10^−16 | 1.0642 × 10^−3
F18 | Best | 3.9789 × 10^−1 | 3.9789 × 10^−1 | 3.9790 × 10^−1 | 3.9789 × 10^−1 | 3.9789 × 10^−1
F18 | Mean | 3.9789 × 10^−1 | 5.2549 × 10^−1 | 5.9041 × 10^−1 | 3.9789 × 10^−1 | 3.9842 × 10^−1
F18 | Std | 0.0000 | 2.6461 × 10^−1 | 2.8257 × 10^−1 | 1.9860 × 10^−15 | 5.1603 × 10^−4
F19 | Best | −3.8628 | −3.8625 | −3.8535 | −3.8628 | −3.8545
F19 | Mean | −3.8628 | −3.7865 | −3.7239 | −3.8628 | −3.8380
F19 | Std | 1.4154 × 10^−15 | 6.0074 × 10^−2 | 9.0524 × 10^−2 | 2.2781 × 10^−15 | 1.3432 × 10^−2
F20 | Best | 1.3498 × 10^−31 | 1.0022 × 10^−9 | 5.4601 × 10^−12 | 1.3498 × 10^−31 | 9.6707 × 10^−5
F20 | Mean | 5.2765 × 10^−30 | 4.1273 × 10^−3 | 7.3170 × 10^−17 | 1.3498 × 10^−31 | 1.5140 × 10^−3
F20 | Std | 1.8721 × 10^−19 | 7.4031 × 10^−3 | 1.3439 × 10^−16 | 0.0000 | 1.9446 × 10^−3
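The Best/Mean/Std summaries in Table 6 come from repeated independent runs of each algorithm. The sketch below shows how such statistics can be gathered; the run count of 30 and the random-search stand-in are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def best_mean_std(optimizer, objective, n_runs: int = 30, seed: int = 0):
    """Summarize an optimizer over repeated independent runs (as in Table 6).

    `optimizer` is any callable that performs one run and returns the best
    objective value found. The run count of 30 is an assumption here.
    """
    rng = np.random.default_rng(seed)
    finals = np.array([optimizer(objective, rng) for _ in range(n_runs)])
    return finals.min(), finals.mean(), finals.std()

def random_search(objective, rng, evals: int = 10_000, dim: int = 30):
    """Hypothetical stand-in for a metaheuristic: pure random search on
    the F1 search range [-100, 100]^30."""
    xs = rng.uniform(-100.0, 100.0, size=(evals, dim))
    return min(objective(x) for x in xs)

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))  # F1 from Table 1
    print(best_mean_std(random_search, sphere))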
Table 7. Classic benchmarks for data classification prediction problem [54].

Number | Dataset | Samples | Features | Categories | Distribution
1 | Iris | 150 | 4 | 3 | 50, 50, 50
2 | Wine | 178 | 13 | 3 | 59, 71, 48
3 | Thyroid | 215 | 5 | 3 | 150, 35, 30
4 | Seeds | 210 | 7 | 3 | 70, 70, 70
5 | WBC | 683 | 9 | 2 | 444, 239
6 | Jain | 373 | 2 | 2 | 276, 97
7 | Cancer | 683 | 9 | 2 | 444, 239
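As a hedged illustration of how such benchmarks can be prepared for a classifier, the snippet below loads Iris (whose sample, feature, and class counts match row 1 of Table 7) and makes a stratified train/test split. The 80/20 ratio and the use of scikit-learn are assumptions for illustration; the paper's own partitioning may differ.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Iris: 150 samples, 4 features, 3 balanced classes (row 1 of Table 7).
X, y = load_iris(return_X_y=True)
assert X.shape == (150, 4)

# Stratified split preserves the 50/50/50 class distribution in both sets;
# the 80/20 ratio here is an illustrative assumption.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
print(X_tr.shape, X_te.shape)
```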
Table 8. Data classification prediction accuracy for the training point set.

Dataset | Values | MLEHO-BPC | EHO-BPC | PSO-BPC | DE-BPC | BA-BPC | Adam-BPC | Nadam-BPC | Lookahead-BPC
Iris | Max | 100.00% | 97.62% | 95.35% | 97.67% | 97.67% | 99.21% | 98.45% | 97.62%
Iris | Mean | 98.89% | 93.46% | 91.46% | 94.00% | 91.10% | 97.67% | 87.10% | 89.58%
Iris | Std | 3.90 × 10^−3 | 3.79 × 10^−2 | 4.49 × 10^−2 | 2.06 × 10^−2 | 4.84 × 10^−2 | 8.90 × 10^−3 | 1.11 × 10^−1 | 1.00 × 10^−1
Wine | Max | 100% | 93.42% | 91.45% | 88.82% | 94.08% | 89.54% | 94.77% | 96.08%
Wine | Mean | 99.25% | 86.05% | 87.93% | 82.77% | 86.24% | 63.41% | 82.04% | 83.66%
Wine | Std | 5.46 × 10^−3 | 3.94 × 10^−2 | 4.19 × 10^−2 | 3.92 × 10^−2 | 4.95 × 10^−2 | 2.98 × 10^−1 | 1.28 × 10^−1 | 1.51 × 10^−1
Thyroid | Max | 99.46% | 89.73% | 96.20% | 92.43% | 95.68% | 99.46% | 96.22% | 96.22%
Thyroid | Mean | 98.76% | 84.11% | 84.96% | 90.78% | 87.59% | 81.96% | 89.87% | 93.73%
Thyroid | Std | 6.30 × 10^−3 | 3.01 × 10^−2 | 5.03 × 10^−2 | 1.08 × 10^−2 | 5.29 × 10^−2 | 1.59 × 10^−1 | 8.82 × 10^−2 | 3.81 × 10^−2
Seeds | Max | 97.22% | 92.78% | 94.44% | 90.00% | 95.56% | 92.78% | 91.67% | 91.67%
Seeds | Mean | 95.48% | 87.78% | 86.51% | 87.22% | 88.73% | 57.06% | 79.13% | 80.16%
Seeds | Std | 1.44 × 10^−2 | 4.05 × 10^−2 | 6.38 × 10^−2 | 1.78 × 10^−2 | 9.65 × 10^−2 | 2.99 × 10^−1 | 1.12 × 10^−1 | 1.01 × 10^−1
WBC | Max | 98.80% | 96.58% | 97.10% | 97.10% | 96.75% | 97.27% | 96.76% | 96.93%
WBC | Mean | 98.17% | 95.27% | 96.39% | 96.22% | 95.19% | 88.37% | 95.32% | 95.24%
WBC | Std | 3.85 × 10^−3 | 1.45 × 10^−2 | 4.78 × 10^−3 | 6.22 × 10^−3 | 1.07 × 10^−2 | 1.60 × 10^−1 | 1.23 × 10^−2 | 1.76 × 10^−2
Jain | Max | 100% | 95.92% | 96.24% | 97.51% | 96.56% | 100% | 92.50% | 92.81%
Jain | Mean | 99.64% | 94.51% | 93.66% | 96.07% | 95.31% | 96.29% | 91.02% | 91.56%
Jain | Std | 5.38 × 10^−3 | 1.55 × 10^−2 | 1.47 × 10^−2 | 8.56 × 10^−3 | 1.05 × 10^−2 | 1.72 × 10^−2 | 9.30 × 10^−3 | 1.00 × 10^−2
Cancer | Max | 98.63% | 96.93% | 96.58% | 97.09% | 96.93% | 98.46% | 96.56% | 96.59%
Cancer | Mean | 98.12% | 95.26% | 95.95% | 96.34% | 95.83% | 93.12% | 95.10% | 95.83%
Cancer | Std | 4.57 × 10^−3 | 2.62 × 10^−2 | 7.95 × 10^−3 | 5.50 × 10^−3 | 7.87 × 10^−3 | 1.20 × 10^−1 | 1.36 × 10^−2 | 4.70 × 10^−3
Table 9. Data classification prediction accuracy for the testing point set.

Dataset | Values | MLEHO-BPC | EHO-BPC | PSO-BPC | DE-BPC | BA-BPC | Adam-BPC | Nadam-BPC | Lookahead-BPC
Iris | Max | 100% | 95.83% | 95.83% | 95.24% | 95.24% | 100% | 100% | 100%
Iris | Mean | 96.60% | 89.20% | 90.56% | 91.33% | 86.05% | 96.84% | 86.99% | 91.92%
Iris | Std | 3.33 × 10^−2 | 7.15 × 10^−2 | 3.72 × 10^−2 | 3.03 × 10^−2 | 7.89 × 10^−2 | 3.22 × 10^−2 | 1.70 × 10^−1 | 9.84 × 10^−2
Wine | Max | 100% | 96.15% | 92.31% | 92.00% | 92.00% | 96.00% | 100% | 100%
Wine | Mean | 96.07% | 82.00% | 84.22% | 79.25% | 83.74% | 62.06% | 79.73% | 81.63%
Wine | Std | 2.97 × 10^−2 | 7.52 × 10^−2 | 6.79 × 10^−2 | 7.63 × 10^−2 | 4.78 × 10^−2 | 2.66 × 10^−1 | 1.58 × 10^−1 | 1.88 × 10^−1
Thyroid | Max | 100% | 90.00% | 90.32% | 93.33% | 93.33% | 100% | 100% | 96.67%
Thyroid | Mean | 97.69% | 81.96% | 83.28% | 87.92% | 84.69% | 83.47% | 88.71% | 91.16%
Thyroid | Std | 2.22 × 10^−2 | 5.32 × 10^−2 | 4.29 × 10^−2 | 3.72 × 10^−2 | 6.57 × 10^−2 | 1.48 × 10^−1 | 9.87 × 10^−2 | 8.32 × 10^−2
Seeds | Max | 100% | 90.00% | 93.33% | 93.33% | 100% | 93.33% | 96.67% | 100%
Seeds | Mean | 92.38% | 82.38% | 83.81% | 86.19% | 84.76% | 57.14% | 81.43% | 82.38%
Seeds | Std | 6.10 × 10^−2 | 4.95 × 10^−2 | 7.65 × 10^−2 | 3.30 × 10^−2 | 1.01 × 10^−1 | 2.99 × 10^−1 | 1.65 × 10^−1 | 1.17 × 10^−1
WBC | Max | 100% | 97.94% | 97.94% | 97.94% | 95.92% | 98.97% | 99.01% | 100%
WBC | Mean | 96.64% | 94.44% | 94.14% | 95.62% | 93.12% | 88.24% | 95.29% | 96.19%
WBC | Std | 1.79 × 10^−2 | 2.25 × 10^−2 | 2.58 × 10^−2 | 1.77 × 10^−2 | 1.96 × 10^−2 | 1.57 × 10^−1 | 3.31 × 10^−2 | 3.42 × 10^−2
Jain | Max | 100% | 94.44% | 96.23% | 98.11% | 96.23% | 100% | 100% | 96.23%
Jain | Mean | 98.93% | 93.56% | 92.21% | 94.38% | 93.29% | 96.30% | 93.58% | 93.83%
Jain | Std | 1.36 × 10^−2 | 1.74 × 10^−2 | 2.40 × 10^−2 | 2.65 × 10^−2 | 2.22 × 10^−2 | 4.46 × 10^−2 | 4.02 × 10^−2 | 2.37 × 10^−2
Cancer | Max | 98.98% | 96.91% | 97.94% | 96.91% | 97.94% | 97.94% | 98.98% | 98.98%
Cancer | Mean | 96.78% | 93.71% | 94.30% | 95.16% | 93.26% | 92.06% | 95.19% | 97.51%
Cancer | Std | 1.15 × 10^−2 | 2.46 × 10^−2 | 2.61 × 10^−2 | 2.16 × 10^−2 | 2.38 × 10^−2 | 1.47 × 10^−1 | 3.65 × 10^−2 | 1.77 × 10^−2
Table 10. Ti6Al4V single-track classification prediction dataset.

Dataset | Samples | Features | Categories | Normal | Keyhole | No-Continuous
Ti6Al4V | 441 | 7 | 3 | 273 | 122 | 46
Table 11. Ti6Al4V process classification accuracy for the training point set.

Dataset | Values | MLEHO-BPC | EHO-BPC | PSO-BPC | DE-BPC | BA-BPC | Adam-BPC | Nadam-BPC | Lookahead-BPC
Ti6Al4V | Max | 99.47% | 97.35% | 97.62% | 94.44% | 95.77% | 97.62% | 97.35% | 97.35%
Ti6Al4V | Mean | 98.26% | 87.57% | 90.67% | 91.57% | 89.00% | 94.37% | 96.22% | 95.54%
Ti6Al4V | Std | 8.20 × 10^−3 | 5.81 × 10^−2 | 5.32 × 10^−2 | 2.08 × 10^−2 | 3.17 × 10^−2 | 3.32 × 10^−2 | 8.30 × 10^−3 | 3.01 × 10^−2
Table 12. Ti6Al4V process classification accuracy for the testing point set.

Dataset | Values | MLEHO-BPC | EHO-BPC | PSO-BPC | DE-BPC | BA-BPC | Adam-BPC | Nadam-BPC | Lookahead-BPC
Ti6Al4V | Max | 98.41% | 96.83% | 96.83% | 93.65% | 92.06% | 98.41% | 98.41% | 98.41%
Ti6Al4V | Mean | 96.60% | 85.71% | 89.34% | 89.80% | 88.44% | 94.10% | 96.15% | 93.65%
Ti6Al4V | Std | 2.32 × 10^−2 | 8.04 × 10^−2 | 4.27 × 10^−2 | 2.73 × 10^−2 | 3.27 × 10^−2 | 4.82 × 10^−2 | 3.16 × 10^−2 | 5.26 × 10^−2
Table 13. Comprehensive evaluation metrics in different comparative classifiers.

Metric | Class | MLEHO-BPC | EHO-BPC | PSO-BPC | DE-BPC | BA-BPC | Adam-BPC | Nadam-BPC | Lookahead-BPC
Precision | Class 1 | 97.2% ± 2.0% | 83.2% ± 7.0% | 89.3% ± 6.4% | 90.1% ± 4.2% | 85.8% ± 5.4% | 91.5% ± 4.4% | 94.5% ± 5.2% | 95.1% ± 3.3%
Precision | Class 2 | 98.2% ± 4.4% | 97.7% ± 2.7% | 97.7% ± 2.8% | 94.3% ± 4.2% | 98.0% ± 2.4% | 97.6% ± 2.7% | 97.9% ± 2.5% | 98.0% ± 3.1%
Precision | Class 3 | 97.0% ± 4.8% | 38.1% ± 45.2% | 53.6% ± 47.1% | 83.3% ± 34.5% | 14.3% ± 35.0% | 71.4% ± 45.2% | 85.7% ± 35.0% | 94.3% ± 14.0%
Recall | Class 1 | 98.2% ± 2.3% | 98.1% ± 1.8% | 97.4% ± 2.7% | 97.1% ± 2.1% | 97.5% ± 2.0% | 97.7% ± 2.6% | 98.0% ± 1.9% | 97.9% ± 2.0%
Recall | Class 2 | 97.5% ± 2.9% | 78.3% ± 28.8% | 93.3% ± 5.8% | 92.6% ± 9.1% | 95.0% ± 8.0% | 96.8% ± 3.8% | 97.2% ± 5.5% | 97.3% ± 3.3%
Recall | Class 3 | 89.1% ± 10.3% | 31.0% ± 40.3% | 43.2% ± 45.6% | 54.1% ± 34.7% | 14.3% ± 35.0% | 51.8% ± 35.6% | 71.2% ± 32.5% | 79.7% ± 17.8%
Macro-F1 | – | 0.959 ± 0.02 | 0.687 ± 0.12 | 0.773 ± 0.15 | 0.827 ± 0.11 | 0.673 ± 0.13 | 0.837 ± 0.13 | 0.902 ± 0.11 | 0.93 ± 0.049
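The per-class precision and recall values and the macro-F1 in Table 13 follow directly from a confusion matrix such as the one in Figure 6. Below is a minimal sketch of that computation, assuming rows index true classes and columns index predicted classes; the example matrix is hypothetical (its row sums merely echo the 273/122/46 class counts of Table 10) and is not the paper's measured result.

```python
import numpy as np

def per_class_metrics(cm: np.ndarray):
    """Per-class precision/recall and macro-F1 from a confusion matrix.

    Assumed convention: cm[i, j] counts samples of true class i predicted
    as class j. The guards against zero denominators are for robustness.
    """
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)  # column sums = predicted counts
    recall = tp / np.maximum(cm.sum(axis=1), 1)     # row sums = true counts
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return precision, recall, f1.mean()  # macro-F1: unweighted mean of class F1s

if __name__ == "__main__":
    # Hypothetical 3-class matrix (normal / keyhole / no-continuous).
    cm = np.array([[268, 3, 2],
                   [3, 119, 0],
                   [2, 3, 41]])
    p, r, macro_f1 = per_class_metrics(cm)
    print(p.round(3), r.round(3), round(macro_f1, 3))
```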
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
