Least Squares Boosting Ensemble and Quantum-Behaved Particle Swarm Optimization for Predicting the Surface Roughness in Face Milling Process of Aluminum Material

Surface roughness is a significant factor in determining product quality and highly impacts the production price. The ability to predict the surface roughness before production would save the time and resources of the process. This research investigated the performance of state-of-the-art machine learning and quantum-behaved evolutionary computation methods in predicting the surface roughness of aluminum material on a face-milling machine. Quantum-behaved particle swarm optimization (QPSO) and the least squares gradient boosting ensemble (LSBoost) were utilized to simulate numerous face-milling experiments and predicted the surface roughness values with a high degree of accuracy. The algorithms showed superior prediction performance over the genetic algorithm (GA) and classical particle swarm optimization (PSO) in terms of statistical performance indicators. The QPSO outperformed all the simulated algorithms, with a root mean square error of RMSE = 2.17% and a coefficient of determination R² = 0.95 that closely matches the actual experimental surface roughness values.


Introduction
Machining is one of the most significant features of any production activity. Among the various machining procedures, milling is a broadly utilized process to create complex geometries in applications such as dies and molds, turbine rotors, etc. [1]. Indeed, milling is the most commonly utilized machining procedure in the manufacturing sector. The major aim of this process is to produce high-quality parts within a reasonable period of time and with high surface quality. Surface roughness has become essential in any machining procedure due to increased quality demands; a component may still be rejected for lacking the required surface finish even when its dimensions are within the dimensional tolerance. Surface roughness is a significant measure of product quality, significantly affects the production price and depends on numerous parameters, for example, cutting speed, tool nomenclature, feed, cutting force, machine rigidity and depth of cut [2]. The conventional method of choosing the machining parameters is based on trial and error and expert knowledge using machining handbooks. This method is time-consuming and exhausting: a human process planner chooses appropriate machining parameters by employing his or her own experience or machining tables, and in many cases the chosen parameters are conservative and far from optimal. Therefore, it becomes essential to introduce a vigorous method to forecast the machining parameters prior to machining, to acquire the required surface roughness of the product in the least possible machining time [3].
Numerous studies have outlined intelligent methods for the optimization of the machining procedure: for example, quadratic programming [4], nonlinear programming [5], sequential programming [6], goal programming [7] and dynamic programming [8] have been applied to resolve the issue by formulating it as a multi-objective function model. The optimization problem has additionally been addressed through several non-traditional optimization approaches, including genetic algorithms (GAs) [9,10], particle swarm optimization (PSO) [11,12], scatter search (SS) [13,14], ant colony optimization (ACO) [15,16], fuzzy-logic-based expert systems [17] and differential evolution (DE) [18]. A hybrid optimization approach built on the differential evolution algorithm and Taguchi's method was established by Yildiz [19]; this hybrid strategy was used to optimize machining parameters in multipass turning operations. Rao et al. [20] proposed an advanced algorithm known as the teaching-learning-based optimization (TLBO) algorithm, which was utilized for the machining-parameter optimization of selected modern machining procedures. Rao et al. [21] presented the MO-Jaya (multi-objective Jaya) algorithm to optimize the abrasive waterjet machining process and compared the outcomes with well-known optimization algorithms. Alajmi et al. [22] introduced a quantum-based optimization method (QBOM) to solve the surface-grinding optimization problem; its performance was examined against two tests, the first a rough grinding procedure and the second a finishing grinding procedure. Yildiz and Solanki [23] used a recent hybrid optimization technique (HPSI), based on the particle swarm algorithm and the receptor-editing property of the immune system, for multi-objective crashworthiness optimization of a full vehicle model along with milling optimization problems.
Baraheni and Amini [24] utilized ANOVA (analysis of variance) to determine the effect of drilling and material parameters, comprising cutting velocity, feed rate, ultrasonic vibration and plate thickness, on thrust force and delamination. Bustillo et al. [25] proposed a strategy to avoid the limitation of high experimental costs relative to dataset size and to accomplish the smart-manufacturing patterns of the friction-drilling process; extending the introduced methodology to other friction-drilling datasets will help to detect the most accurate machine learning algorithm for this industrial task. The results on this dataset showed that AdaBoost ensembles offered the highest accuracy and were more easily optimized than artificial neural networks. Sanchez et al. [26] used a hybrid computer-integrated system for improving the accuracy of corner cutting that combines experimental knowledge of the process and numerical simulation. The system allows the user to choose the optimum cutting strategy, either by wire-path modification or by cutting-regime modification (when high accuracy is required). The validity of the system was verified through a series of case studies, which demonstrated the improvements in accuracy and productivity with respect to the normally used strategies.
This research contributes by assessing the accuracy of state-of-the-art machine learning and quantum evolutionary optimization algorithms in predicting the surface roughness values in a face-milling process. Ensemble learning based on least squares gradient boosting (LSBoost) and quantum-behaved particle swarm optimization (QPSO) are investigated and assessed in terms of statistical performance indicators, to predict the surface roughness with a high degree of accuracy. Such prediction would allow the process operator to save time and labor and achieve the required production quality.

Face-Milling Mathematical Model
The mathematical model is expressed according to the influence of the machining parameters on the workpiece's surface roughness. The machining parameters involved in the process are obtained from the machine's limitation data. The mathematical model of the milling procedure presented in the literature [2] is used in the present study.

Objective Function
The purpose of predicting the machining parameters in the face-milling procedure is to attain the preferred surface-roughness values of the workpiece in the least machining time. According to the influence of the machining parameters on the responses, the surface-roughness equations are expressed as demonstrated in Equations (1) and (2).
Since the measured cutting speed's range is quite broad, a single equation might not fit the predicted surface roughness over the whole range; therefore, Equations (1) and (2) were formulated to fit every situation with minimal error. The empirical constants in Equations (1) and (2) can be determined from the influence of the machining parameters on the machining time and the surface roughness. The equations for the machining time are provided in Equations (3) and (4).
where ∆ is the overtravel (2.5 mm), L denotes the length of the milled surface in mm and Y denotes the cutter approach length in mm.

Constraints
There are numerous constraints that occur in real machining conditions for the optimization of the objective function. A specified depth of cut, feed rate and cutting speed are selected, and consideration is given to the trade-off between surface roughness and machining time. The boundaries of these parameters are presented in Table 1; they are examined for optimizing the machining parameters and defined as follows.

Particle Swarm Optimization Algorithm (PSO)
The particle swarm optimization (PSO) algorithm is a population-based swarm intelligence optimization algorithm that was introduced by Kennedy and Eberhart [27] and has been widely used in various applications. The PSO algorithm was inspired by flocks of birds and schools of fish. When compared to other evolutionary algorithms, such as the bacterial foraging algorithm (BFA), PSO is simpler to implement and more computationally efficient in converging to optimal solutions [28]. Moreover, PSO is robust to its control parameters and easy to implement in various software packages [29].
The implementation of the PSO algorithm starts by distributing a population of particles within the defined search space. These particles move towards the optimal solution by computing the objective function at each iteration. Each particle computes its personal best (pbest) solution, which is then compared across all particles. The global best (gbest) solution is determined at each iteration until the termination criterion is met and the optimal solution is found.
The position and velocity of each particle are computed and updated at each iteration, according to update equations that incorporate the personal best and global best solutions. The following equations correspond to the steps in the pseudocode of Algorithm 1 of the classical PSO algorithm. In Step 1, the position of the i-th particle in the d-dimensional search space is represented as follows:

X_i = (x_i,1, x_i,2, ..., x_i,d) (5)

Similarly, the velocity array can be represented as follows:

V_i = (v_i,1, v_i,2, ..., v_i,d) (6)

During each iteration, the fitness of each particle is evaluated and compared with its personal best array of solutions; if the current fitness is better than the previous fitness values, the personal best solution is updated with the new fitness value and added to the array of personal best solutions, defined as follows:

pbest_i = (p_i,1, p_i,2, ..., p_i,d) (7)

In Step 4 of the PSO pseudocode, the global best solution among all the fitness values of the whole population is determined at each iteration, and the array of global best solutions is defined as follows:

gbest = (g_1, g_2, ..., g_d) (8)

The particle's velocity and position are updated in Equations (9) and (10), respectively, as follows:

v_i,j(t+1) = v_i,j(t) + c_1·r_1·(p_i,j − x_i,j(t)) + c_2·r_2·(g_j − x_i,j(t)) (9)

x_i,j(t+1) = x_i,j(t) + v_i,j(t+1) (10)

where c_1 and c_2 are the cognitive and social coefficients and r_1 and r_2 are uniform random numbers in [0, 1].

Algorithm 1 Classical PSO algorithm
Step 1: Set the population size and randomly initialise the particle positions and velocities.
Step 2: Evaluate each particle's fitness according to the required objective function.
Step 3: Evaluate the personal best solution of each particle.
Step 4: Evaluate the global best solution.
Step 5: Update the velocities.
Step 6: Update the positions.
Step 7: Repeat Steps 2-6 until the termination criterion is met.
The PSO algorithm terminates once the termination criterion is met; whether the problem is a maximization or a minimization, the criterion is usually a user-defined number of iterations. The selection of the PSO control parameters is a critical step: poor choices can negatively impact the optimization process and may result in premature convergence, narrowed search diversity and local optima traps [29]. Several studies, such as References [29][30][31], have investigated the selection of PSO parameters in terms of the population size, social coefficient, cognitive coefficient, maximum particle velocity and maximum number of iterations.
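The update loop described above can be sketched as a minimal implementation of the classical PSO of Algorithm 1. This is an illustrative sketch, not the paper's MATLAB code: the box bounds and sphere test objective are stand-ins for the milling objective and constraints, and the velocity clamp is a common practical safeguard that the text does not mention.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iters=200, c1=2.0, c2=2.0, seed=0):
    """Minimise `objective` over a box search space with classical PSO."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    # Step 1: random initialisation of positions and velocities
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = rng.uniform(-(hi - lo), hi - lo, size=(n_particles, dim))
    pbest = x.copy()
    pbest_f = np.apply_along_axis(objective, 1, x)       # Steps 2-3
    g = pbest[np.argmin(pbest_f)].copy()                 # Step 4: global best
    for _ in range(n_iters):                             # Step 7: iterate
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Step 5 (Eq. 9): velocity update with cognitive and social terms
        v = v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        v = np.clip(v, -(hi - lo), hi - lo)              # practical velocity clamp
        # Step 6 (Eq. 10): position update, kept inside the search bounds
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)         # Step 2
        better = f < pbest_f                             # Step 3: pbest update
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()             # Step 4
    return g, pbest_f.min()

# Usage: minimise a 3-dimensional sphere function as a stand-in objective.
best_x, best_f = pso(lambda z: float(np.sum(z**2)), ([-5.0] * 3, [5.0] * 3))
```

The personal bests only ever improve, so the returned global best is monotone non-increasing over iterations.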

Quantum-Behaved Particle Swarm Optimization Algorithm (QPSO)
In the literature, there exist many variations of PSO, such as quantum-behaved PSO, deep-learning-driven PSO, hybrid PSO-bacterial foraging optimization (BFO) and the hybrid PSO-Grey Wolf Optimizer (GWO) approach [32][33][34][35]. These variations aim at faster convergence rates, avoidance of local optima traps and parameter-selection rules for better PSO performance. In this paper, we focus on the quantum-behaved PSO (QPSO) presented by Sun et al. [36], which has fewer parameters to control and outperforms the original PSO algorithm.
In classical PSO, the particles' positions and velocities are calculated at each iteration and updated to diversify the search and converge towards the optimal solution; thus, the trajectory of each particle within the search space is deterministic. In quantum mechanics, however, according to Heisenberg's uncertainty principle, the velocity and the position of a particle cannot be determined simultaneously [37], and the state of the particle is described by Schrödinger's wave function ψ(x,t). By solving Schrödinger's equation to obtain the probability density function of the particle's location in space, and applying Monte Carlo simulation, the position update of the particle can be expressed as follows:

x_i,j(t+1) = p_i,j + β·|Mbest_j − x_i,j(t)|·ln(1/u), if k ≥ 0.5
x_i,j(t+1) = p_i,j − β·|Mbest_j − x_i,j(t)|·ln(1/u), if k < 0.5 (11)

where Mbest is the mainstream thought or mean best value, x_i,j(t+1) is the position of the i-th particle in the j-th dimension of the space, u and k are uniform random numbers in the range [0, 1], β denotes the contraction-expansion coefficient and p_i is the local attractor point.
The mean best value is defined as the mean of the personal best positions of all particles in the population and is evaluated as follows:

Mbest = (1/N) Σ_{k=1}^{N} pbest_k (12)

where N is the population size. In addition, the local attractor, p_i, which guarantees the convergence of the algorithm [37], is defined as follows:

p_i = φ·p_k,i + (1 − φ)·p_g,i (13)

where φ is a uniform random number in [0, 1], p_k,i and p_g,i represent the pbest and gbest, respectively, and g represents the index of the best particle in the population. The pseudocode of the QPSO algorithm is presented in Algorithm 2.

Algorithm 2 QPSO algorithm
Step 1: Set the population size and randomly initialise the particle positions.
Step 2: Evaluate each particle's fitness according to the required objective function.
Step 3: Evaluate the personal best solution of each particle.
Step 4: Evaluate the global best solution.
Step 5: Calculate the mean best (Mbest) of all the pbest values of the population.
Step 6: Update the positions.
Step 7: Repeat Steps 2-6 until the termination criterion is met.

Least Squares Boosting Ensemble (LSBoost)
The gradient boosting ensemble method consists of a finite set of weak learners and a meta-learner that assigns weights to each learner and combines their predictions, using voting methods, to provide better predictive performance for regression problems. Boosting is a supervised machine-learning method in which the data are split into training, validation and testing sets. The algorithm trains the individual weak learners, which take the form of decision trees, sequentially, fitting the residual errors at each stage to achieve better performance. The LSBoost method uses least squares as the loss criterion. The LSBoost pseudocode is presented in Algorithm 3, as given by Friedman [38], with h(x; a) denoting a weak learner and F_m(x) the regression function.

Algorithm 3 LSBoost algorithm
Initialization: F_0(x) = ȳ (the mean of the training targets)
For m = 1 to M do:
  ỹ_i = y_i − F_{m−1}(x_i), i = 1, ..., N
  (ρ_m, a_m) = arg min_{ρ,a} Σ_{i=1}^{N} [ỹ_i − ρ·h(x_i; a)]²
  F_m(x) = F_{m−1}(x) + ρ_m·h(x; a_m)
End For

To assess the prediction accuracy, several performance indicators are utilized: the root mean square error (RMSE), mean absolute error (MAE), coefficient of variation of the root mean square error (CVRMSE), mean absolute percentage error (MAPE) and the coefficient of determination, R², as defined in Equations (14)-(18), respectively:

RMSE = sqrt( (1/n) Σ (y_i − ŷ_i)² ) (14)
MAE = (1/n) Σ |y_i − ŷ_i| (15)
CVRMSE = (RMSE / ȳ) × 100% (16)
MAPE = (100/n) Σ |y_i − ŷ_i| / |y_i| (17)
R² = 1 − Σ (y_i − ŷ_i)² / Σ (y_i − ȳ)² (18)

where y_i and ŷ_i denote the actual and predicted values, respectively, and ȳ is the mean of the actual values.

Results
The QPSO algorithm was executed in the MATLAB software package to optimize the machining time for a desired surface roughness of the aluminum material in the milling-machine system introduced previously. The QPSO parameters utilized in the simulation are presented in Table 2; they were selected based on heuristic tuning, with the cognitive acceleration and social coefficients set to equal values, as recommended by several studies in the literature [27,39,40]. Meanwhile, the parameters of the LSBoost algorithm are presented in Table 3. In addition, the simulation parameters used in MATLAB for the PSO and GA algorithms are presented in Tables 4 and 5, respectively.

Table 3. LSBoost parameters.

Parameter            Value
Number of learners   50
Learning rate        0.01
Minimum leaf size    1

Table 4. PSO parameters.

Parameter                  Value
Iterations                 1000
Particles population       100
Cognitive acceleration c1  2
Social coefficient c2      2

A total of 36 pilot experiments were executed to confirm the computational results of the QPSO and LSBoost algorithms, and their performance was compared with that of the PSO and GA algorithms, which have been extensively reported in the literature, for the milling machine to accomplish the required surface roughness with the minimum machining time, as presented in Table 6.
The results obtained by the QPSO for the machining parameters show the ability of the algorithm to achieve the minimum machining time for the required surface roughness, in comparison to the experimental results, with a high degree of accuracy. Table 7 provides the statistical performance indicators of each algorithm in terms of the MAPE, RMSE, MAE and R² values. It can be noted that QPSO achieved the best accuracy in predicting the surface roughness values, with an RMSE of 2.17% and a high coefficient of determination of R² = 0.95. The high R² value clearly indicates that the predicted values closely match the actual experimental surface roughness values and evidences the superior performance of the QPSO, owing to the algorithm's strength in exploring the search space and avoiding local optima traps. The LSBoost provides the second-best predictions, with an RMSE of 3.74% and an R² of 0.88; this can be attributed to the gradient ensemble combining different weak learners into a meta-learner that provides the best prediction at each step. The GA algorithm resulted in a performance comparable to the LSBoost algorithm in terms of the coefficient of determination, with R² = 0.871; however, its RMSE of 4.86% is higher than that of the LSBoost algorithm by 1.12%. On the other hand, the PSO algorithm resulted in an RMSE of 4.99% and the lowest R² value of 0.84.
In addition, Figure 3 illustrates the absolute error percentage of each data point predicted by the algorithms, to assess the performance of each algorithm at each experiment. The highest error percentages are observed in the PSO predictions, with maximum error values of 28%. The LSBoost overshot at some prediction points, with a maximum error of 45% at Experiment 27; however, its overall errors lie within 25%. The GA algorithm again resulted in a performance comparable to the LSBoost algorithm, while the QPSO algorithm showed a promising performance with maximum error values of approximately 20%.

Conclusions
This paper presented an investigation of the accuracy of state-of-the-art machine-learning and quantum-behaved evolutionary algorithms in predicting the surface roughness of a face-milling machine. Ensemble learning based on least-squares gradient boosting and the quantum-behaved particle swarm optimization algorithm were utilized to predict the surface-roughness values, given the machining parameters of a face-milling machine. The performance of these algorithms was compared with other existing methods in the literature, specifically the genetic algorithm and particle swarm optimization, and resulted in better prediction performance. The accuracy of the algorithms was assessed based on the RMSE, MAPE, MAE, CVRMSE and R² values of each algorithm. QPSO achieved the best prediction accuracy in terms of its performance indicators, with an RMSE of 2.17%, MAE of 1.59% and R² of 0.95. LSBoost achieved the second-best prediction performance, with an RMSE of 3.74%, MAE of 2.92% and R² of 0.88. It is evident that QPSO and LSBoost predict the surface roughness in the face-milling machine with a high degree of accuracy. Moreover, utilizing such methods to predict the surface roughness before production would save considerable resources in terms of expenses, labor and time.

Abbreviations
v_min  minimum spindle speed (rpm)
v_max  maximum spindle speed (rpm)
f_min  minimum feed rate (mm/rev)
f_max  maximum feed rate (mm/rev)
a_min  minimum depth of cut (mm)
a_max  maximum depth of cut (mm)
T_m    machining time (s)
R_a    surface roughness (µm)
L      length of the workpiece (mm)
B      width of the workpiece (mm)