# A Comparative Study between Regression and Neural Networks for Modeling Al6082-T6 Alloy Drilling


## Abstract

The prediction of thrust force (F_{z}) and cutting torque (M_{z}) is assessed based on several criteria. From the analysis, it was found that the MLP models were superior to the other neural network models and to the regression model, as they achieved a relatively lower prediction error for both the F_{z} and M_{z} models.

## 1. Introduction

## 2. Artificial Neural Networks

where P_{i} are the predicted values and A_{i} the actual values.
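In code, the two error metrics used throughout the study take the following standard form (a sketch; the predicted values P_{i} and actual values A_{i} are as defined above):

```python
def mse(predicted, actual):
    """Mean squared error of predictions P_i against actual values A_i."""
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)

def mape(predicted, actual):
    """Mean absolute percentage error in percent; undefined if an actual value is 0."""
    return 100.0 * sum(abs((a - p) / a) for p, a in zip(predicted, actual)) / len(actual)
```

MSE penalizes large deviations quadratically, while MAPE expresses the average error relative to the measured values, which makes the two metrics complementary for model comparison.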

where c_{i} is the center of the radial basis function. Apart from the widely used MLP and RBF-NN models, various other types of neural networks exist, as well as hybrid models combining ANNs with other soft computing methods such as fuzzy logic. In addition to the learning and optimization ability of neural networks, fuzzy logic systems provide a means of reasoning through IF-THEN rules. ANFIS is a soft computing method with characteristics of both neural networks and fuzzy logic. Generally, the creation of an ANFIS model has several similarities to the creation of an MLP model, but there are also some basic differences. The model has a structure similar to the MLP structure, with additional layers related to the membership functions of the Fuzzy Inference System (FIS). One of the most common fuzzy models used in ANFIS is the Sugeno fuzzy model. Through the training process, fuzzy rules are formed and an FIS is created. At first, the input data are converted into fuzzy sets through a fuzzification layer, and membership functions are generated; these can have various shapes, such as triangular, trapezoidal, or Gaussian. In the second layer, the number of nodes equals the number of fuzzy rules of the system, which is determined by a suitable method such as subtractive clustering. The output of each second-layer node is the product of the input signals it receives from the previous nodes. In the third layer, the ratio of each rule's firing strength to the sum of the firing strengths of all rules is calculated; this ratio is used in the defuzzification layer to determine the output value of each node. Finally, the total output is calculated by summing the outputs of the fourth layer. The FIS parameters are updated by means of a learning algorithm in order to reduce the prediction error, as in the case of the MLP model.
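The layer computations described above can be illustrated with a minimal forward pass of a first-order Sugeno FIS. This is a sketch only: the Gaussian membership parameters and linear consequent coefficients below are hypothetical placeholders, not the trained parameters of the paper's models.

```python
import math

def gauss(x, c, s):
    """Gaussian membership degree of input x for a fuzzy set with center c, width s."""
    return math.exp(-((x - c) ** 2) / (2.0 * s ** 2))

def sugeno_infer(inputs, rules):
    """Forward pass of a first-order Sugeno FIS, mirroring the ANFIS layers.

    Each rule is a pair (mf, coeffs): mf holds per-input Gaussian membership
    parameters, and coeffs is the linear consequent [a_1, ..., a_n, constant].
    """
    # Layers 1-2: fuzzify inputs; firing strength = product of membership degrees
    firing = []
    for mf, _ in rules:
        w = 1.0
        for x, c, s in zip(inputs, mf["centers"], mf["widths"]):
            w *= gauss(x, c, s)
        firing.append(w)
    # Layer 3: normalize each firing strength by the sum over all rules
    total = sum(firing)
    norm = [w / total for w in firing]
    # Layers 4-5: weight each rule's linear consequent by its normalized
    # strength, then sum the weighted consequents into the total output
    out = 0.0
    for (_, coeffs), wbar in zip(rules, norm):
        y = coeffs[-1] + sum(a * x for a, x in zip(coeffs[:-1], inputs))
        out += wbar * y
    return out
```

With two hypothetical rules whose consequents are the same constant, the output reduces to that constant, since the normalized firing strengths always sum to one.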

## 3. Methods and Materials

#### 3.1. Experimental Techniques

The cutting forces (thrust force F_{z} and torque M_{z} values) were recorded with a Kistler Type 9123 dynamometer connected to a three-channel charge amplifier and a data acquisition system. DynoWare software (Type 2825D-02) was used to monitor and record the experimental values of the cutting forces, as shown in Figure 1.

#### 3.2. Methodology of Results Analysis

For the development of the models, dedicated software was employed. For each method, only one characteristic, mainly related to the architecture of the network, was varied; the other settings were kept at the software defaults. For the MLP and RBF-NN, the number of hidden neurons was varied. For the ANFIS, where the number of inner nodes cannot be altered in the same way as in the other two types of networks, the number of clusters used for the data (obtained with the fuzzy c-means clustering method), which directly affects the number of parameters of the system, was varied instead. In all cases, the lower and upper boundaries of the variable parameters were chosen so as not to lead to extreme overfitting; the analysis conducted was able to determine the limit values of the variable parameters that lead to a properly functioning model.

According to a commonly used empirical rule, the appropriate number of hidden neurons can be estimated as N_{h} = (N_{inp} + N_{out})^{1/2} + α (Equation (4)), where N_{inp} is the number of input neurons, N_{out} the number of output neurons, and α a number between zero and 10. In the present case, N_{inp} equals three, i.e., drill diameter, cutting speed, and feed, and N_{out} is one, i.e., thrust force or cutting torque. Thus, according to this formula, the lower and upper boundaries for the number of hidden neurons in the present study were two and 12. However, the optimum number of hidden neurons was investigated by taking into consideration the performance of the models regarding not only MSE but also MAPE, for both the training and testing datasets: a low error is required for the training data, while the error on the testing dataset indicates whether the network is overfitting. For the MLP and RBF-NN, the number of hidden neurons was varied between one and eight, a choice comparable to the suggestions of Equation (4). Each network was trained 50 times with the same settings in order to overcome the effect of the random initialization of weights. For the ANFIS models, the parameter varied was the number of clusters, in the range of 2–4. As with the other two types of networks, the training was repeated 50 times, and the results regarding MSE and MAPE for the training and testing data are presented.
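The empirical boundaries for the number of hidden neurons can be sketched directly in code, assuming the rule takes the common form N_{h} = (N_{inp} + N_{out})^{1/2} + α with α between 0 and 10; with the study's three inputs and one output, it reproduces the 2–12 range quoted above.

```python
import math

def hidden_neuron_bounds(n_inp, n_out, alpha_max=10):
    """Lower/upper bound on hidden neurons: N_h = sqrt(N_inp + N_out) + alpha,
    with alpha ranging from 0 to alpha_max."""
    base = math.sqrt(n_inp + n_out)
    return round(base), round(base + alpha_max)

# Three inputs (drill diameter, cutting speed, feed), one output (Fz or Mz)
lo, hi = hidden_neuron_bounds(3, 1)
```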

Models were developed for both the thrust force F_{z} and the torque M_{z}. In Figure 4, a flowchart graphically presenting the procedure followed in the present study is depicted.

## 4. Results and Discussion

#### 4.1. Experimental Results

F_{z} and M_{z} were modelled separately using polynomial mathematical models.

#### 4.2. Multiple Regression Models

where x_{i} stands for the coded values and b_{i} for the models' regression coefficients.

For F_{z}, these factors are: D (p-value = 0.026), V (p-value = 0.002), V*V (p-value = 0.000), and D*f (p-value = 0.005), while for M_{z}, the significant terms are: D (p-value = 0.000), D*D (p-value = 0.000), V*V (p-value = 0.046), and D*f (p-value = 0.000). Moreover, from Table 3 and Table 4, it can be seen that the coefficient of multiple determination was very close to unity for both cases (R^{2} = 99.6% for F_{z} and R^{2} = 98.9% for M_{z}), and the adjusted coefficient (R^{2}_{adj}) was 99.4% for F_{z} and 98.3% for M_{z}. All these statistical estimators indicate an appropriate RSM model that can be used for predictive simulations of the drilling process. Of course, for both cases, only the significant terms need be included in order to obtain a set of simplified equations. The agreement between the experimental and the RSM-predicted data is shown in Figure 6. Residual analysis was performed to test the models' accuracy; in both cases, all points were positioned near a straight line, indicating that RSM predicts the experimental data well within the considered region.
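The fitted quadratic model can be evaluated directly from the coefficients in Table 3. The sketch below assumes those coefficients apply to the natural (uncoded) values of D (mm), V (m/min), and f (mm/rev); if the published model is in coded units, the factors would first need rescaling.

```python
def predict_fz(D, V, f):
    """Thrust force predicted by the full quadratic RSM model,
    using the coefficient estimates of Table 3."""
    return (-221.9
            + 57.59 * D + 1.9550 * V + 1291.8 * f
            - 0.762 * D * D - 0.011660 * V * V - 667 * f * f
            + 0.05600 * D * V + 103.25 * D * f - 1.203 * V * f)
```

For example, at the middle levels of Table 2 (D = 10 mm, V = 100 m/min, f = 0.2 mm/rev), the polynomial evaluates to roughly 827.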

#### 4.3. Multi-Layer Perceptron Models

The errors of the M_{z} model were considerably larger than those of the F_{z} model. Nevertheless, the results confirmed that the optimum number of neurons depends on the size of the dataset; for the relatively small dataset of 27 samples, a model with six hidden neurons was sufficient.
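For reference, the forward pass of a single-hidden-layer MLP of the kind tuned here can be sketched as follows; the weight values in the usage example are placeholders, not the trained weights of the paper's models.

```python
import math

def mlp_predict(x, W1, b1, W2, b2):
    """MLP forward pass: one hidden layer with tanh activation, linear output.

    W1 is a list of per-hidden-neuron input weight rows, b1 the hidden biases,
    W2 the output weights, and b2 the output bias.
    """
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

# Placeholder network with one hidden neuron and three inputs (D, V, f)
y = mlp_predict([1.0, 2.0, 3.0], [[0.0, 0.0, 0.0]], [0.0], [1.0], 0.5)
```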

#### 4.4. Radial Basis Function Neural Network Models

Regarding the cutting torque (M_{z}) case, it was observed from Table 8 and Figure 10 that the optimum model was the one with eight hidden neurons, as it exhibited both a lower MSE and a lower MAPE for the testing dataset. Thus, it can be stated that for RBF-NN models, a slightly larger number of hidden neurons was required compared to the MLP models.
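The RBF-NN output is a weighted sum of Gaussian basis functions of the distance between the input and each hidden neuron's center c_{i}. A minimal sketch follows; the centers, widths, and weights in the example are illustrative, not the trained values.

```python
import math

def rbf_predict(x, centers, widths, weights, bias=0.0):
    """RBF network forward pass:
    y = bias + sum_i w_i * exp(-||x - c_i||^2 / (2 * s_i^2))."""
    y = bias
    for c, s, w in zip(centers, widths, weights):
        d2 = sum((xj - cj) ** 2 for xj, cj in zip(x, c))
        y += w * math.exp(-d2 / (2.0 * s ** 2))
    return y
```

At a basis-function center the Gaussian equals one, so a single-neuron network returns exactly its output weight there, which gives a quick sanity check.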

#### 4.5. Adaptive Neuro-Fuzzy Inference System Models

#### 4.6. Comparison of Regression and Neural Network Models

The experimental values of F_{z} and M_{z} are compared with the corresponding predicted values of the ANN and RSM models, respectively.

## 5. Conclusions

The regression models achieved high R^{2} values for the F_{z} and M_{z} models, respectively, and also low MAPE values, namely 0.86305% and 3.0495% for the F_{z} and M_{z} models, respectively.

The MLP models were found to be superior to the other models in both the F_{z} and M_{z} cases.

## Author Contributions

## Funding

## Conflicts of Interest


**Figure 7.** Results regarding the MLP model for F_{z}: (**a**) MSE for the training and testing datasets and (**b**) MAPE for the training and testing datasets.

**Figure 8.** Results regarding the MLP model for M_{z}: (**a**) MSE for the training and testing datasets and (**b**) MAPE for the training and testing datasets.

**Figure 9.** Results regarding the RBF-NN model for F_{z}: (**a**) MSE for the training and testing datasets and (**b**) MAPE for the training and testing datasets.

**Figure 10.** Results regarding the RBF-NN model for M_{z}: (**a**) MSE for the training and testing datasets and (**b**) MAPE for the training and testing datasets.

**Figure 11.** Results regarding the ANFIS model for F_{z}: (**a**) MSE for the training and testing datasets and (**b**) MAPE for the training and testing datasets.

**Figure 12.** Results regarding the ANFIS model for M_{z}: (**a**) MSE for the training and testing datasets and (**b**) MAPE for the training and testing datasets.

**Table 1.** Chemical composition and mechanical properties of the Al6082-T6 alloy.

Chemical Composition

| Elements | Mg | Cu | Cr | Fe | Si | Mn | Others | Al |
|---|---|---|---|---|---|---|---|---|
| Percentage | 0.8 | 0.1 | 0.25 | 0.5 | 0.9 | 0.5 | 0.3 | Balance |

Mechanical Properties

| Alloy | Temper | Thermal Conductivity (W/m·K) | Coefficient of Thermal Expansion (K^{−1}) | Melting Point (Min–Max) | Elastic Modulus (GPa) | Electrical Resistivity (Ω·m) |
|---|---|---|---|---|---|---|
| Al6082 | T6 | 180 | 24 × 10^{−6} | 555–650 | 70 | 0.038 × 10^{−6} |

**Table 2.** Machining parameters and their levels.

| Factors | Notation | Level I | Level II | Level III |
|---|---|---|---|---|
| Cutting speed (m/min) | V | 50 | 100 | 150 |
| Feed rate (mm/rev) | f | 0.15 | 0.2 | 0.25 |
| Tool diameter (mm) | D | 8 | 10 | 12 |

**Table 3.** ANOVA and regression coefficients for the F_{z} model.

| Source | Degrees of Freedom | Sum of Squares | Mean Square | f-Value | p-Value |
|---|---|---|---|---|---|
| Regression | 9 | 514.673 | 57.186 | 471.95 | 0.000 |
| Residual Error | 17 | 2.060 | 0.121 | | |
| Total | 26 | 516.733 | | | |

R-Sq(adj) = 99.4%

| Predictor | Coefficient | SE of Coefficient | t-Value | p-Value |
|---|---|---|---|---|
| Constant | −221.9 | 151.4 | −1.47 | 0.161 |
| D | 57.59 | 23.60 | 2.44 | 0.026 |
| V | 1.9550 | 0.5455 | 3.58 | 0.002 |
| f | 1291.8 | 798.0 | 1.62 | 0.124 |
| D*D | −0.762 | 1.123 | −0.68 | 0.506 |
| V*V | −0.011660 | 0.001798 | −6.49 | 0.000 |
| f*f | −667 | 1798 | −0.37 | 0.715 |
| D*V | 0.05600 | 0.03178 | 1.76 | 0.096 |
| D*f | 103.25 | 31.78 | 3.25 | 0.005 |
| V*f | −1.203 | 1.271 | −0.95 | 0.357 |

**Table 4.** ANOVA and regression coefficients for the M_{z} model.

| Source | Degrees of Freedom | Sum of Squares | Mean Square | f-Value | p-Value |
|---|---|---|---|---|---|
| Regression | 9 | 37.8929 | 4.2103 | 171.77 | 0.000 |
| Residual Error | 17 | 0.4167 | 0.0245 | | |
| Total | 26 | 38.3096 | | | |

R-Sq(adj) = 98.3%

| Predictor | Coefficient | SE of Coefficient | t-Value | p-Value |
|---|---|---|---|---|
| Constant | 12.941 | 2.154 | 6.01 | 0.000 |
| D | −2.6736 | 0.3357 | −7.96 | 0.000 |
| V | 0.013678 | 0.007758 | 1.76 | 0.096 |
| f | −15.58 | 11.35 | −1.37 | 0.188 |
| D*D | 0.14025 | 0.01598 | 8.78 | 0.000 |
| V*V | −0.00005493 | 0.00002557 | −2.15 | 0.046 |
| f*f | 4.67 | 25.57 | 0.18 | 0.857 |
| D*V | −0.0001225 | 0.0004520 | −0.27 | 0.790 |
| D*f | 2.6250 | 0.4520 | 5.81 | 0.000 |
| V*f | −0.02330 | 0.01808 | −1.29 | 0.215 |

**Table 5.** Results of the MLP models for F_{z}.

| No. of Hidden Neurons | MSE_{train} | MSE_{test} | MAPE_{train} (%) | MAPE_{test} (%) |
|---|---|---|---|---|
| 1 | 2.35 × 10^{−4} | 6.36 × 10^{−4} | 1.46 × 10^{0} | 2.93 × 10^{0} |
| 2 | 8.42 × 10^{−5} | 1.05 × 10^{−4} | 9.84 × 10^{−1} | 1.14 × 10^{0} |
| 3 | 2.72 × 10^{−7} | 1.53 × 10^{−4} | 5.32 × 10^{−2} | 1.36 × 10^{0} |
| 4 | 2.20 × 10^{−6} | 6.10 × 10^{−5} | 1.36 × 10^{−1} | 8.23 × 10^{−1} |
| 5 | 1.78 × 10^{−15} | 1.31 × 10^{−4} | 2.50 × 10^{−6} | 9.45 × 10^{−1} |
| 6 | 3.40 × 10^{−22} | 4.42 × 10^{−4} | 2.04 × 10^{−9} | 2.22 × 10^{0} |
| 7 | 4.98 × 10^{−24} | 2.59 × 10^{−4} | 2.12 × 10^{−10} | 2.02 × 10^{0} |
| 8 | 5.03 × 10^{−23} | 2.60 × 10^{−4} | 6.41 × 10^{−10} | 2.06 × 10^{0} |

**Table 6.** Results of the MLP models for M_{z}.

| No. of Hidden Neurons | MSE_{train} | MSE_{test} | MAPE_{train} (%) | MAPE_{test} (%) |
|---|---|---|---|---|
| 1 | 7.39 × 10^{−4} | 1.57 × 10^{−3} | 4.75 × 10^{0} | 6.86 × 10^{0} |
| 2 | 4.11 × 10^{−4} | 1.08 × 10^{−3} | 3.53 × 10^{0} | 3.48 × 10^{0} |
| 3 | 1.14 × 10^{−4} | 9.34 × 10^{−4} | 1.65 × 10^{0} | 4.15 × 10^{0} |
| 4 | 6.65 × 10^{−6} | 6.66 × 10^{−4} | 3.32 × 10^{−1} | 3.20 × 10^{0} |
| 5 | 5.36 × 10^{−6} | 1.14 × 10^{−3} | 3.59 × 10^{−1} | 4.10 × 10^{0} |
| 6 | 5.53 × 10^{−7} | 1.86 × 10^{−4} | 1.30 × 10^{−1} | 2.19 × 10^{0} |
| 7 | 6.60 × 10^{−23} | 7.81 × 10^{−4} | 6.79 × 10^{−10} | 4.36 × 10^{0} |
| 8 | 8.68 × 10^{−23} | 1.45 × 10^{−3} | 1.36 × 10^{−9} | 6.01 × 10^{0} |

**Table 7.** Results of the RBF-NN models for F_{z}.

| No. of Hidden Neurons | MSE_{train} | MSE_{test} | MAPE_{train} (%) | MAPE_{test} (%) |
|---|---|---|---|---|
| 1 | 9.92 × 10^{−3} | 2.01 × 10^{−2} | 1.08 × 10^{−1} | 1.94 × 10^{1} |
| 2 | 1.20 × 10^{−3} | 6.62 × 10^{−4} | 4.09 × 10^{0} | 2.70 × 10^{0} |
| 3 | 1.59 × 10^{−4} | 2.78 × 10^{−4} | 1.34 × 10^{0} | 1.98 × 10^{0} |
| 4 | 1.30 × 10^{−4} | 1.39 × 10^{−4} | 1.21 × 10^{0} | 1.51 × 10^{0} |
| 5 | 9.99 × 10^{−5} | 1.60 × 10^{−4} | 1.12 × 10^{0} | 1.52 × 10^{0} |
| 6 | 9.07 × 10^{−5} | 1.65 × 10^{−4} | 1.02 × 10^{0} | 1.45 × 10^{0} |
| 7 | 8.19 × 10^{−5} | 1.14 × 10^{−4} | 9.42 × 10^{−1} | 1.13 × 10^{0} |
| 8 | 7.67 × 10^{−5} | 1.32 × 10^{−4} | 9.29 × 10^{−1} | 1.35 × 10^{0} |

**Table 8.** Results of the RBF-NN models for M_{z}.

| No. of Hidden Neurons | MSE_{train} | MSE_{test} | MAPE_{train} (%) | MAPE_{test} (%) |
|---|---|---|---|---|
| 1 | 3.31 × 10^{−2} | 5.94 × 10^{−2} | 3.03 × 10^{1} | 3.60 × 10^{1} |
| 2 | 5.60 × 10^{−3} | 1.26 × 10^{−2} | 1.37 × 10^{1} | 2.13 × 10^{1} |
| 3 | 4.20 × 10^{−3} | 1.05 × 10^{−2} | 1.14 × 10^{1} | 1.98 × 10^{1} |
| 4 | 2.74 × 10^{−3} | 1.05 × 10^{−2} | 9.51 × 10^{0} | 2.04 × 10^{1} |
| 5 | 2.65 × 10^{−3} | 1.23 × 10^{−2} | 9.52 × 10^{0} | 2.38 × 10^{1} |
| 6 | 2.37 × 10^{−3} | 7.83 × 10^{−3} | 9.20 × 10^{0} | 1.83 × 10^{1} |
| 7 | 6.49 × 10^{−4} | 1.24 × 10^{−3} | 4.63 × 10^{0} | 6.27 × 10^{0} |
| 8 | 4.82 × 10^{−4} | 8.73 × 10^{−4} | 3.75 × 10^{0} | 5.81 × 10^{0} |

**Table 9.** Results of the ANFIS models for F_{z}.

| No. of Clusters | MSE_{train} | MSE_{test} | MAPE_{train} (%) | MAPE_{test} (%) |
|---|---|---|---|---|
| 2 | 4.94 × 10^{−5} | 3.68 × 10^{−4} | 8.03 × 10^{−1} | 2.49 × 10^{0} |
| 3 | 1.37 × 10^{−5} | 4.47 × 10^{−4} | 3.60 × 10^{−1} | 2.74 × 10^{0} |
| 4 | 2.80 × 10^{−6} | 1.20 × 10^{−4} | 1.40 × 10^{−1} | 1.07 × 10^{0} |

**Table 10.** Results of the ANFIS models for M_{z}.

| No. of Clusters | MSE_{train} | MSE_{test} | MAPE_{train} (%) | MAPE_{test} (%) |
|---|---|---|---|---|
| 2 | 5.39 × 10^{−4} | 1.49 × 10^{−3} | 3.66 × 10^{0} | 8.38 × 10^{0} |
| 3 | 2.75 × 10^{−5} | 2.52 × 10^{−3} | 8.22 × 10^{−1} | 1.03 × 10^{1} |
| 4 | 5.90 × 10^{−6} | 2.63 × 10^{−3} | 3.16 × 10^{−1} | 7.78 × 10^{0} |

**Table 11.** Comparison of total MAPE values for all models.

| Type of Model | MAPE_{total} (F_{z}) | MAPE_{total} (M_{z}) |
|---|---|---|
| Multiple Regression | 0.86305% | 3.0495% |
| MLP | 0.2979% | 0.9667% |
| RBF-NN | 0.978% | 4.129% |
| ANFIS | 0.312% | 1.698% |
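The ranking in the comparison table above can be verified programmatically; the values below are copied directly from that table.

```python
# Total MAPE values (percent) from the model comparison table
mape_total = {
    "Multiple Regression": {"Fz": 0.86305, "Mz": 3.0495},
    "MLP":                 {"Fz": 0.2979,  "Mz": 0.9667},
    "RBF-NN":              {"Fz": 0.978,   "Mz": 4.129},
    "ANFIS":               {"Fz": 0.312,   "Mz": 1.698},
}

# The best model minimizes total MAPE for each response
best_fz = min(mape_total, key=lambda m: mape_total[m]["Fz"])
best_mz = min(mape_total, key=lambda m: mape_total[m]["Mz"])
```

For both responses, the minimum is attained by the MLP models, in agreement with the conclusions of the study.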


Karkalos, N.E.; Efkolidis, N.; Kyratsis, P.; Markopoulos, A.P. A Comparative Study between Regression and Neural Networks for Modeling Al6082-T6 Alloy Drilling. *Machines* **2019**, *7*, 13. https://doi.org/10.3390/machines7010013