Article

Neural-Network-Based Approaches for Optimization of Machining Parameters Using Small Dataset

by
Aleksandar Kosarac
1,
Cvijetin Mladjenovic
2,*,
Milan Zeljkovic
2,
Slobodan Tabakovic
2 and
Milos Knezev
2
1
Faculty of Mechanical Engineering, University of East Sarajevo, 71123 Istočno Sarajevo, Bosnia and Herzegovina
2
Department of Production Engineering, Faculty of Technical Sciences, University of Novi Sad, 21000 Novi Sad, Serbia
*
Author to whom correspondence should be addressed.
Materials 2022, 15(3), 700; https://doi.org/10.3390/ma15030700
Submission received: 25 December 2021 / Revised: 10 January 2022 / Accepted: 13 January 2022 / Published: 18 January 2022
(This article belongs to the Special Issue Intelligent Machining: Process Optimisation)

Abstract
Surface quality is one of the most important indicators of the quality of machined parts. The analytical method of defining the arithmetic mean roughness is not applied in practice due to its complexity, and empirical models are applied only for certain values of machining parameters. This paper presents the design and development of artificial neural networks (ANNs) for the prediction of the arithmetic mean roughness, one of the most common surface roughness parameters. The dataset used for ANN development was obtained experimentally by machining AA7075 aluminum alloy under various machining conditions. With four factors, each having three levels, a full factorial design requires a total of 81 experimental runs. Using the input factor-level settings and adopting the Taguchi method, the experiments were reduced from 81 runs to 27 through an orthogonal design. In this study, we aimed to check how reliable the results of artificial neural networks are when obtained from a small input-output dataset, as in the case of applying the Taguchi methodology to a four-factor, three-level experiment in which 27 trials were conducted. Furthermore, this paper considers the optimization of machining parameters for minimizing surface roughness when machining AA7075 aluminum alloy. The results show that ANNs can be successfully trained on small datasets and used to predict the arithmetic mean roughness. The best results were achieved by backpropagation multilayer feedforward neural networks trained with the Bayesian regularization (BR) algorithm.

1. Introduction

Surface quality is one of the most important indicators of the quality of machined parts [1]. The arithmetic mean roughness (Ra) represents a measure of the surface quality [2] and is influenced by the machining parameters and tool geometry. The analytical method of defining the arithmetic mean roughness is not applied in practice due to its complexity, and empirical models are applied only for certain values of machining parameters.
The design of machine parts very often focuses on dimensional and form tolerances. In cases where the quality of the surface has significant importance and requires an indicator, the arithmetic mean roughness (Ra) is often used. Some researchers have investigated the influence of cutting parameters (cutting speed, feed rate, axial and radial depth of the cut) on the arithmetic mean roughness (Ra) [3,4,5,6,7,8,9].
It can be noted that cutting speed, feed rate and the depth of the cut are the most dominant factors in these studies, even though some researchers have used other factors which can influence surface roughness, such as vibration or tool wear [10,11,12].
Some researchers have examined the influence of different cooling/lubricating techniques as factors influencing the arithmetic mean roughness (Ra) [13] or have analyzed processes operating under dry conditions [14]. On the other hand, some authors have used other influencing parameters, such as the chips’ characteristics [15] or tool geometry parameters [14,16,17,18,19].
In all the above-mentioned studies, the most often used workpiece materials were aluminum alloys, magnesium alloys, superalloys (Inconel 718 and Titanium alloy) and hardened steel.
The focus of this study is on the modeling of artificial neural networks for the prediction of the arithmetic mean roughness (Ra) in milling. Previous studies showed that neural networks can be applied for surface roughness predictions in different machining operations such as turning [5,9,10,11,14,16,19], milling [3,4,6,7,8,13,15,17,18,20,21,22] and drilling [12].
To reduce the cost of experiments, researchers often use the Taguchi design of experiments [5,6,15,17,20,21], although when the number of factors and levels in the Taguchi design is not too high, an experiment can be conducted as a full factorial [4,14]. The input-output datasets used in all the above-mentioned studies for handling surface roughness predictions can be considered small.
Many researchers have used the trial-and-error method to improve the performance of neural networks. That means the application of different network topologies, i.e., different numbers of hidden layers and numbers of neurons incorporated into them, different training algorithms, learning parameters, etc. Through this process, neural networks are tested and evaluated, and then the optimal structure is determined.
The neural networks most frequently used to predict the arithmetic mean roughness (Ra) are feed-forward backpropagation neural networks and radial basis functions. Munoz-Escalona et al. [15], Šarić et al. [13] and Fang et al. [16] compared both network types to find the optimal model, whereas Hossain et al. [6], Al Hazza et al. [3] and Zain et al. [18] used feed-forward backpropagation neural networks, etc.
Table 1 and Table 2 show a literature review of recent work on arithmetic mean roughness (Ra) prediction using neural networks. Table 1 contains data on the machining process, materials to cut and cutting conditions, and Table 2 contains information on the ANN type, the structure of the neural network and the dataset at disposal for network training.
Researchers have tested different network topologies and training algorithms to obtain good prediction results, and Benardos et al. [20] found that the 5-3-1 topology and the LM training algorithm showed the best performance. Similarly, Alharthi et al. [4] achieved good results in predicting Ra using a 3-6-1 neural network topology and the momentum algorithm. Al Hazza et al. [3] proposed a 3-20-4-4 network topology and an LM algorithm. Vardhan et al. [22] found that a network topology of 5-8-8-2 had the best prediction performance, etc.
From Table 2 it can be seen that dataset size varied from 18 up to 304 samples, but most of the presented studies used 27 samples. The ratio of the training, validation and testing samples varies depending on the training algorithm, so Al Hazza et al. [3] used a 70:15:15 ratio, Ezugwua et al. [10] 50:25:25 and 67:33, Zain et al. [18] 85:15, Alharthi et al. [4] 80:20, etc.
How big does a dataset need to be for one to make a performance prediction? The answer to this question is not simple and depends first of all on the problem’s complexity and the learning algorithm’s complexity. The way to check this is to train neural networks of different architectures using the available data set, perform a simulation and compare the results to the data that can be considered accurate.
In this paper, we present the development of artificial neural networks (ANNs) for the prediction of the arithmetic mean roughness, which is one of the most common surface roughness parameters. The dataset used for ANN development was obtained experimentally by machining AA7075 aluminum alloy under various machining conditions. The experiment was conducted based on the full factorial plan with four factors and three levels each, whereas the arithmetic mean roughness was selected as a response.
For four factors having three levels each, the full factorial design considers a total of 81 experimental runs that have to be carried out. Using input factor-level settings and adopting the Taguchi method, the experiments could be reduced from 81 runs to 27 runs by means of an orthogonal design.
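For illustration, the 3⁴ full factorial can be enumerated directly. The short Python sketch below uses the factor levels reported in Section 2; the third axial depth level (1.0 mm) and the coolant labels are assumptions for illustration only. The Taguchi L27 array would select 27 of these 81 combinations.

```python
from itertools import product

# Factor levels from Section 2 / Table 3; the third axial depth level
# (1.0 mm) and the coolant labels are assumptions for illustration.
levels = {
    "cutting_speed_m_min": [120, 160, 200],
    "feed_per_tooth_mm": [0.05, 0.10, 0.15],
    "axial_depth_mm": [0.6, 0.8, 1.0],
    "coolant": ["emulsion", "dry", "air"],
}

# Full factorial: every combination of the four three-level factors.
full_factorial = list(product(*levels.values()))
print(len(full_factorial))  # 81
```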
The dataset used for ANN training contained 27 input-output parameters, corresponding to the L27 orthogonal array. A certain number of neural networks with different architectures was developed and evaluated.
The neural networks which showed the best performance after evaluation were then simulated. For network simulation, we used input pairs derived from the full factor plan, but these were not used in network training. The results of the ANN simulation were then compared with the output data set obtained experimentally.
Furthermore, this study demonstrates the reliability of artificial neural networks trained on a small input-output dataset.

2. Experimental Procedure

This paper investigates the effects of four factors, each having three levels, on the surface roughness: cutting speed v, feed rate f, axial depth of cut a and the type of coolant/lubricant. The experimental factors and their levels are given in Table 3.
Experiments were conducted on an Emco Concept Mill 250 milling center, machining AA7075 aluminum alloy under various machining conditions. All experimental runs were performed under identical machining conditions and using the same machine tool. The chemical composition of AA7075 aluminum alloy is given in Table 4.
A solid carbide multi-flute end mill Φ12 × 33/80 series WAE303A was used for the experiment. This tool is designed for the cutting of aluminum alloys, brass and bronze, is uncoated and has three flutes, and the angle of the helix is 45 degrees.
The numbers of revolutions per minute used in the experiments (n1 = 3185 rpm, n2 = 4246 rpm, and n3 = 5308 rpm) were obtained based on adopted upper and lower cutting speed values, using the formula n = (1000 · v)/(D · π), where D is cutter diameter and v is cutting speed. The feed rate per minute is obtained based on the determined number of revolutions and selected feed per tooth (Table 3) using the formula s = n∙sz∙z, where n is rpm, sz is the feed per tooth and z is the cutting tool number of flutes. The minimum feed rate used in experiments (smin = 478 mm/min) corresponds to the smallest number of revolutions (n1 = 3185 rpm) and the lowest value of feed per tooth (sz1 = 0.05 mm/tooth). The maximum feed rate (smax = 2389 mm/min) corresponds to a maximum rpm (n3 = 5308 rpm) and maximum feed per tooth (sz3 = 0.15 mm/tooth).
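As a check on the reported values, the two formulas above can be evaluated directly. This is a minimal Python sketch; the cutting speed levels of 120 and 200 m/min are inferred from the reported rpm values, and small differences from the paper's rounded figures are expected.

```python
import math

def rpm(v_m_min: float, d_mm: float) -> float:
    """Spindle speed n = 1000*v / (D*pi); v in m/min, D in mm."""
    return 1000.0 * v_m_min / (d_mm * math.pi)

def feed_rate(n_rpm: float, sz_mm: float, z: int) -> float:
    """Table feed s = n * sz * z, in mm/min."""
    return n_rpm * sz_mm * z

D, Z = 12.0, 3                            # cutter diameter (mm) and flutes
n1 = rpm(120, D)                          # ~3183 rpm (paper rounds to 3185)
s_min = feed_rate(n1, 0.05, Z)            # ~478 mm/min
s_max = feed_rate(rpm(200, D), 0.15, Z)   # ~2387 mm/min (paper: 2389)
print(round(n1), round(s_min), round(s_max))
```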
The sample parts were cut from a 20 mm-thick AA7075 plate to dimensions of 30 × 30 × 20 mm and fixed in a general-purpose milling vise. Machining was performed on both sides of each sample using the climb milling method: the sample parts were fed along the cutter’s direction of rotation, since this ensures a better quality of the machined surface (Figure 1). The axial depth of cut levels were a1 = 0.6 mm, a2 = 0.8 mm and a3 = 1.0 mm, and the radial depth of cut had a constant value of a = 15 mm. The following cooling/lubricating methods were used: air, dry cutting and emulsion.
The arithmetic mean roughness Ra was measured using a Mitutoyo SJ-210 measuring device (Mitutoyo Europe GmbH, Neuss, Germany). The device has the following characteristics: cutoff filter values λc of 0.08, 0.25, 0.8 and 2.5 mm; a phase-matched Gaussian filter according to DIN EN ISO 16610-21; measuring speeds of 0.25, 0.50 and 0.75 mm/s (1 mm/s on return); and a measuring force/stylus type of 0.75 mN/2 μm R 60°. For this experiment, the device settings were determined according to the expected value of Ra and were λf = 2.5 μm, λc = 0.08 mm and ln = 4 mm.
Semi-synthetic metalworking fluid BIOL MIN-E, quality level ISO 6743/7, was used for wet machining. This fluid can be used for machining all types of steel and cast iron, as well as non-ferrous and light metals.

3. Taguchi Method for Optimization of Cutting Parameters

The influence of various factors on the quality of the surface was determined experimentally. Implementing a full factorial experiment means carrying out the maximum number of experimental runs determined by the number of factors and levels, which implies a longer time and thus a higher cost of the experiment. The Taguchi method is one of the most commonly used methods in the design of experiments. It allows fewer experimental runs compared to the full factorial plan: its main goal is to define a minimum number of experimental runs that still contains the optimal combination of factors and levels. Subsequent data analysis is based on statistical methods and determines the optimal conditions that minimize the cost function. The Taguchi optimization method is often used to obtain low surface roughness in various cutting operations [23,24,25,26,27].
As a measure of the quality characteristic, the signal-to-noise (S/N) ratio is observed; it represents the deviation from the desired value.
For the optimization of static problems, there are three signal-to-noise ratios [28]:
  • Smaller is better:
$$ S/N = -10 \log_{10} \left( \frac{1}{n} \sum_{i=1}^{n} y_i^2 \right) \quad (1) $$
  • Bigger is better:
$$ S/N = -10 \log_{10} \left( \frac{1}{n} \sum_{i=1}^{n} \frac{1}{y_i^2} \right) \quad (2) $$
  • Nominal is best:
$$ S/N = 10 \log_{10} \left( \frac{\bar{y}^2}{s_y^2} \right) \quad (3) $$
In the above equations, S/N is the signal-to-noise ratio, n is the number of responses, $y_i$ is the response for the factor/level combination, $\bar{y}$ is the mean of the responses for the factor/level combination and $s_y^2$ is the variance of the responses for a given factor/level combination. The category of the S/N ratio is selected based on the quality characteristic of interest.
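The three S/N criteria can be sketched directly in Python; this is a minimal illustration using base-10 logarithms and the sample variance for the nominal-is-best case.

```python
import math

def sn_smaller_is_better(y):
    """S/N = -10*log10( (1/n) * sum(y_i^2) ); the criterion used for Ra."""
    n = len(y)
    return -10.0 * math.log10(sum(v * v for v in y) / n)

def sn_bigger_is_better(y):
    """S/N = -10*log10( (1/n) * sum(1/y_i^2) )."""
    n = len(y)
    return -10.0 * math.log10(sum(1.0 / (v * v) for v in y) / n)

def sn_nominal_is_best(y):
    """S/N = 10*log10( ybar^2 / s_y^2 ), using the sample variance."""
    n = len(y)
    ybar = sum(y) / n
    s2 = sum((v - ybar) ** 2 for v in y) / (n - 1)
    return 10.0 * math.log10(ybar * ybar / s2)

# A run with a single measured Ra of 0.2556 um:
print(round(sn_smaller_is_better([0.2556]), 2))  # ~11.85 dB
```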
The goal of this experiment was the determination of the arithmetical mean roughness (Ra), and the selected S/N ratio corresponded to the “Smaller is better” criterion. In this case, a full factorial experiment entailed 3⁴ = 81 experimental runs. Using the Taguchi method, the experiments were reduced from 81 runs to 27 runs by means of an orthogonal design. The orthogonal matrix shown in Table 5 has 27 rows, representing the number of experimental runs.
The last column in Table 5 contains the S/N values based on measuring the obtained Ra values, using the “Smaller is better” criterion and taking into consideration Equation (1). Based on S/N, it is possible to find out which parameters are the most influential in regard to the arithmetical mean roughness value, Ra.
An arithmetic mean roughness (Ra) response table for cutting speed, axial depth of cut, feed per tooth and cooling/lubricating technique was created in an integrated manner, and the results are presented in Table 6. A greater S/N value corresponds to better performance. Therefore, the optimal level for the arithmetic mean roughness (Ra) is the level with the greatest S/N value, obtained at a cutting speed of 120 m/min (level 1), an axial depth of cut of 0.8 mm (level 2), a feed per tooth of 0.05 mm/tooth (level 1) and air cooling (level 3).
Figure 2 shows S/N values for all factors/levels based on the data shown in Table 6. The slope of the line that joins the different parameter levels determines the contribution of each factor to the arithmetical mean roughness value, Ra. If the difference in the S/N ratio between levels is remarkable, the factor is more significant. The opposite is also true. If the difference in the S/N ratio from one level to another is negligible, that indicates that the factor is insignificant.
Figure 2 and Table 6 show that the feed rate had a more significant influence on the arithmetic mean roughness, Ra. The influence of the cooling and lubricating technique was less significant, whereas the influence of the depth of the cut and the cutting speed were the least significant. The optimization of the studied factors concerning the “Smaller is better” criterion provided the optimal combination, coded as 1-2-1-3. That implies the following input parameters: cutting speed 120 m/min, cutting depth 0.8 mm, feed per tooth 0.05 mm/tooth, and air cooling (Table 7).
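The response-table analysis behind Figure 2 (mean S/N per factor level, with factors ranked by the level-to-level delta) can be sketched as follows. The data here are a toy illustration, not the paper's measurements.

```python
from collections import defaultdict

def response_table(levels_per_run, sn_per_run):
    """Mean S/N for each factor level, plus each factor's delta (max-min).

    levels_per_run: list of tuples, one level code per factor per run.
    sn_per_run: the matching S/N values.
    """
    n_factors = len(levels_per_run[0])
    table = []
    for f in range(n_factors):
        sums, counts = defaultdict(float), defaultdict(int)
        for run, sn in zip(levels_per_run, sn_per_run):
            sums[run[f]] += sn
            counts[run[f]] += 1
        table.append({lvl: sums[lvl] / counts[lvl] for lvl in sums})
    deltas = [max(m.values()) - min(m.values()) for m in table]
    return table, deltas

# Toy 4-run, 2-factor illustration (hypothetical S/N values):
runs = [(1, 1), (1, 2), (2, 1), (2, 2)]
sn = [10.0, 6.0, 9.0, 5.0]
table, deltas = response_table(runs, sn)
print(table[1])   # factor 2 level means: {1: 9.5, 2: 5.5}
print(deltas)     # [1.0, 4.0] -> factor 2 is the more influential
```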
The minimum of the arithmetical mean roughness, Ra, determined analytically based on the optimal input parameter combination, had a value of Ra = 0.262458 μm. Since the 1-2-1-3 combination does not exist in the orthogonal array, a confirmation experiment needed to be conducted.
The experimentally obtained value of Ra for the 1-2-1-3 combination of input parameters was Ra = 0.2556 μm, which confirms the results obtained when applying the presented method.

4. ANOVA Technique

ANOVA is a statistical technique used to test the significance of the main factors by comparing a mean square to an estimate of the experimental error at a certain level of confidence. In this study, the arithmetic mean roughness Ra obtained experimentally (Table 5, column 9) was analyzed using ANOVA. The analysis of variance illustrates the degree of importance of each factor that prominently influenced the arithmetic mean roughness Ra. Table 8 shows the ANOVA results for the arithmetic mean roughness Ra.
Columns 5 and 6 in Table 8 demonstrate the significance rates of the process parameters on the arithmetic mean roughness Ra, i.e., they depict whether the factors had a noticeable impact on Ra.
Based on the F distribution tables, the critical ratios corresponding to the 95% and 99% confidence levels are F(0.05; 2, 18) = 3.5546 and F(0.01; 2, 18) = 6.0129. This means that the cutting speed, feed rate and cooling/lubricating technique were physically and statistically significant, since F > F(α = 5%) and F > F(α = 1%).
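The critical F values quoted above can be reproduced with standard statistical software; a short sketch using SciPy (assuming its availability), with the 2 and 18 degrees of freedom from the ANOVA table:

```python
from scipy.stats import f

# Critical F values for (2, 18) degrees of freedom, as in the ANOVA above.
f_05 = f.ppf(0.95, 2, 18)   # ~3.5546 (95% confidence level)
f_01 = f.ppf(0.99, 2, 18)   # ~6.0129 (99% confidence level)
print(round(f_05, 4), round(f_01, 4))
```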
The most influential parameter affecting the surface roughness was the feed, at 91.392%. This also confirms results obtained using the Taguchi methodology, shown in Figure 2 and Table 6.
A relatively small error value of 1.254% indicates the following:
  • The influence of parameters that we did not consider in the experiment on the arithmetic mean roughness, Ra, was negligible;
  • The small value of the error indicates that the experiment can be considered successful.

5. Artificial Neural Networks

The artificial neural network is a data processing system inspired by the human biological neural network. ANNs use datasets obtained experimentally or analytically to model the behavior of a system with several influencing factors. An ANN has an input layer, one or more hidden layers and an output layer. Each layer consists of one or more elementary units called neurons [30]. The neuron is considered a processor, having one or more inputs and one output. The number of input factors and the number of outputs define the number of neurons in the input and output layers. The number of neurons in the hidden layer is determined according to Equations (4)–(6) [31].
$$ n = \frac{2(N_x + N_y)}{3} \quad (4) $$
$$ n < 2 N_x \quad (5) $$
$$ n = \sqrt{N_x \cdot N_y} \quad (6) $$
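A sketch of the three heuristics as read here (Equation (6) is interpreted as the geometric mean of the input and output counts, an assumption of this reconstruction), evaluated for this study's 4-input, 1-output network:

```python
import math

def hidden_neuron_heuristics(nx: int, ny: int):
    """Three rules of thumb for hidden-layer size (Eqs. (4)-(6))."""
    rule1 = 2 * (nx + ny) / 3       # two-thirds of inputs plus outputs
    rule2_upper = 2 * nx            # n should stay below twice the inputs
    rule3 = math.sqrt(nx * ny)      # geometric mean of inputs and outputs
    return rule1, rule2_upper, rule3

# For this study: 4 inputs (v, a, sz, coolant) and 1 output (Ra):
print(hidden_neuron_heuristics(4, 1))  # (3.33..., 8, 2.0)
```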
A desirable approach when defining a neural network’s architecture is to create several different architectures (topologies), apply different training algorithms and then evaluate their performance using different criteria. Of the multiple training methods for neural networks, this paper covers only supervised learning. Supervised learning involves providing the inputs and the correct outputs: the network processes the inputs and compares its results to the desired outputs. The errors (differences between the desired outputs and the network outputs) are backpropagated through the system, leading to adjustments of the weights which control the network.
The dataset used for developing the ANNs contained 27 samples, belonging to the initial set of 81 samples. The NN model trained with 27 samples was then presented (in the simulation process) with an experimentally obtained input set. The simulation input set was completely unknown to the network, since none of its elements was used in training. The aim was to estimate the accuracy of a network trained using a small dataset. Matlab software, i.e., its graphical environment NNToolbox, was used for neural network development. The samples were first divided into three sets: 70% of the dataset (19 samples) was used for training, 15% (four samples) for validation and 15% (four samples) for testing.
The Matlab function divideind ensures that the network performs the same data division each time it is trained: the training dataset contained samples 1 to 19, the validation dataset samples 20 to 23 and the testing dataset samples 24 to 27. In total, 108 different neural models were developed, using 18 different network architectures. Each of the selected network architectures was trained using six learning algorithms: Levenberg–Marquardt (LM), Bayesian regularization (BR), resilient backpropagation (RP), gradient descent with momentum and adaptive learning rate (GDX), quasi-Newton backpropagation (BFG) and scaled conjugate gradient backpropagation (SCG). The sigmoid and linear functions were used as transfer functions: the sigmoid transfer function was used between the input and hidden layer, sigmoid or linear functions between the hidden layers, and the linear transfer function between the hidden and output layer.
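The fixed index-based split can be mimicked outside Matlab; below is a minimal Python analogue of the divideind assignment described here (the sample numbering is taken from the text):

```python
# Fixed-index split of the 27 L27 runs, mirroring Matlab's divideind:
# samples 1-19 train, 20-23 validation, 24-27 test.
indices = list(range(1, 28))            # the 27 runs, 1-indexed
train = indices[:19]                    # samples 1..19
val = indices[19:23]                    # samples 20..23
test = indices[23:]                     # samples 24..27
print(len(train), len(val), len(test))  # 19 4 4
```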
Data Preprocessing
Several preprocessing techniques can be used to prepare numerical datasets. One of them is linear scaling, in which variables are scaled linearly, usually into the interval from 0 to 1. In this paper, the input and target variables were scaled linearly into the range 0.1–0.9 according to Equations (7) and (8) [9,10,32].
$$ x_{scal} = \bar{x}_{min} + \frac{x - x_{min}}{x_{max} - x_{min}} \left( \bar{x}_{max} - \bar{x}_{min} \right) \quad (7) $$
$$ y_{scal} = \bar{y}_{min} + \frac{y - y_{min}}{y_{max} - y_{min}} \left( \bar{y}_{max} - \bar{y}_{min} \right) \quad (8) $$
Here, $\bar{x}_{min}$, $\bar{x}_{max}$, $\bar{y}_{min}$ and $\bar{y}_{max}$ represent the lower and upper bounds of the range into which the data are scaled, and $x_{min}$, $x_{max}$, $y_{min}$ and $y_{max}$ represent the smallest and largest values in the input and output datasets, respectively.
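A small Python sketch of this scaling and its inverse (the inverse is needed to map network outputs back to Ra in engineering units):

```python
def linear_scale(x, x_min, x_max, lo=0.1, hi=0.9):
    """Scale x from [x_min, x_max] into [lo, hi], as in Eq. (7)."""
    return lo + (x - x_min) / (x_max - x_min) * (hi - lo)

def inverse_scale(x_scal, x_min, x_max, lo=0.1, hi=0.9):
    """Map a scaled value back to its original units."""
    return x_min + (x_scal - lo) / (hi - lo) * (x_max - x_min)

# Example: feed per tooth 0.10 mm/tooth within the 0.05-0.15 range
s = linear_scale(0.10, 0.05, 0.15)
print(round(s, 6))                           # midpoint maps to 0.5
print(round(inverse_scale(s, 0.05, 0.15), 6))
```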
Neural Network Evaluation
Figure 3 shows the procedure of defining the input-output dataset based on the full factorial plan, training the neural network with the data corresponding to the L27 orthogonal array, and simulating the network with an input dataset. A total of 108 artificial neural networks with different architectures and different training algorithms were developed to predict the arithmetic mean roughness under different machining conditions (Appendix A). Out of that number, the 18 networks with the best characteristics were selected. The estimation of characteristics was carried out based on the value of the mean square error (MSE) on the validation dataset and the correlation coefficient (R) on the test dataset (Table 7). Next, each of the 18 selected models was simulated with an input dataset containing 81 samples derived from the experiment.
Of that number, 54 samples were completely unknown to the network, whereas 27 were known. The outputs from the neural network simulations were then compared with the output set obtained experimentally. Based on a comparison of the absolute errors between the simulation model outputs and the experimentally determined outputs, a conclusion was drawn about the quality of the neural networks obtained from a small dataset. Table 9 shows the mean square error between the actual values obtained experimentally and the estimated values (the output of each NN simulated with the input set).
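The MSE comparison used in Table 9 amounts to the following; a minimal sketch with hypothetical values:

```python
def mse(actual, predicted):
    """Mean squared error between measured and network-predicted Ra."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical Ra values (um), for illustration only:
print(round(mse([0.3, 0.5], [0.3, 0.4]), 6))  # 0.005
```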
The lowest MSE value, 0.0025, was achieved by the 4-10-1 neural network architecture, trained using the Bayesian regularization (BR) algorithm (Figure 4).

6. Results and Discussions

After the neural network was selected, a simulation was carried out using the input/output dataset provided by the experiment. The input/output dataset used for the simulation and comparison of results contained 54 samples, which were not used in network training. Network performance was evaluated by comparing the results generated by the neural network with the experimentally determined output values. Figure 5 compares the arithmetic mean roughness (Ra) values predicted by the NN trained using the BR algorithm with the Ra values obtained experimentally.
The relative error for the 54 simulation outputs was calculated according to Equation (9):
$$ \delta = \left| \frac{Ra_{exp} - Ra_{NN}}{Ra_{exp}} \right| \times 100\% \quad (9) $$
where δ is the relative output error, Raexp is the value of the arithmetic mean roughness (Ra) determined experimentally and RaNN is the output value obtained by the network. Based on the relative error value, six error intervals were established (0–5%, 5–10%, 10–15%, 15–20%, 20–25% and more than 25%). Figure 6 shows the neural network error distribution in arithmetic mean roughness prediction.
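Equation (9) and the interval counting behind Figure 6 can be sketched as follows; the Ra values shown are hypothetical and serve only to illustrate the binning.

```python
def relative_errors(ra_exp, ra_nn):
    """Per-sample relative error delta (Eq. (9)), in percent."""
    return [abs((e - p) / e) * 100.0 for e, p in zip(ra_exp, ra_nn)]

def bin_errors(errors, edges=(5, 10, 15, 20, 25)):
    """Count errors falling into the intervals used in Figure 6:
    0-5%, 5-10%, 10-15%, 15-20%, 20-25% and more than 25%."""
    counts = [0] * (len(edges) + 1)
    for err in errors:
        for i, edge in enumerate(edges):
            if err < edge:
                counts[i] += 1
                break
        else:
            counts[-1] += 1
    return counts

# Hypothetical measured vs. predicted Ra values (um):
errs = relative_errors([0.40, 0.50, 0.30], [0.43, 0.44, 0.21])
print([round(e, 1) for e in errs])  # [7.5, 12.0, 30.0]
print(bin_errors(errs))             # [0, 1, 1, 0, 0, 1]
```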
The relatively high error values (above 20%) were due to the small range of output values. The regression plot for the simulation, i.e., network outputs vs. experimentally obtained outputs, is shown in Figure 7. If the network outputs were equal to the targets (the experimentally obtained data), the data would fit a 45-degree line (dashed gray line). In this study, the fit can be considered good for all datasets, with an R value of 0.9758 and a fitted regression line (red solid line) with slope 1.02266 and intercept −8.07479 × 10⁻¹⁷. The mean square error (MSE) was equal to 0.00444271. The optimal combination of input parameters was 1-2-1-3. The minimum arithmetical mean roughness, Ra, was determined through simulation using the neural network and this input set. The result, after re-scaling, was Ra = 0.32 μm.

7. Conclusions

The ability to predict surface roughness parameters under given machining conditions is important from an economic perspective as well as from the point of view of assessing the mechanical properties of a part and its function. This paper provides an overview of research related to the prediction of arithmetic mean roughness (Ra) using artificial neural networks in the end milling of aluminum alloy. The most frequently used factors in the presented studies were the feed rate, cutting speed and the depth of the cut. Moreover, different cooling/lubrication techniques, vibration and tool wear also have an impact on surface roughness.
In this paper, we analyzed the influence of the feed rate, cutting speed, axial depth of cut and cooling/lubricating technique on surface roughness.
This study shows that:
  • When designing and conducting an experiment using the Taguchi technique, the number of samples is smaller in comparison to the full factor plan. In such a case, neural networks handle small datasets of experimental data. Bearing in mind that the set of available data is small, it is necessary to carefully plan the neural network topology and algorithms for training.
  • In this study, the dataset used for ANN development contained 27 samples. The full factorial plan was used to simulate and evaluate the neural network.
  • Improving neural network performance is possible through the trial-and-error method. This means including different training algorithms, different numbers of hidden layers and neurons, learning parameters, the transfer function, etc. Of the 108 developed neural network models, a topology consisting of four neurons in the input layer, one hidden layer with ten neurons and one neuron in the output layer (4-10-1) was found to have the lowest value of MSE, equal to 0.00444271. The 4-10-1 network structure was trained using the BR algorithm. The results and the levels of the mean square error (MSE) were acceptable in terms of the proposed model for the prediction of arithmetic mean roughness (Ra).
  • Twenty-seven input-output pairs were considered a small dataset in the context of neural network modeling. This study shows that it is possible to obtain a good prediction and that a small dataset is not an obstacle.
  • Compared to conventional methods, the advantages of using ANNs are their higher speed, simplicity and the possibility of learning based on examples.
  • A disadvantage is that the application of this method requires an experimentally determined dataset, which can be expensive and time-consuming in some cases.
The presented research shows that artificial neural networks can be used to predict arithmetic mean roughness (Ra) in a short time and with good reliability, despite using a small dataset. Tackling the reverse problem can be a way of extending this research. That approach would imply defining the cutting parameters based on the desired value of surface roughness.

Author Contributions

Conceptualization, A.K. and M.Z.; methodology, A.K. and C.M.; software, A.K. and M.K.; validation, M.Z. and S.T.; investigation, A.K. and C.M.; resources, A.K. and S.T.; writing—original draft preparation, A.K.; writing—review and editing, M.Z. and C.M.; visualization, M.K. and C.M.; supervision, M.Z. All authors have read and agreed to the published version of the manuscript.

Funding

The paper presents a part of the research results of the project “Innovative scientific and artistic research from the FTS domain”, No. 451-03-68/2020-14/200156, supported by the Ministry of Education, Science and Technological Development of the Republic of Serbia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author, upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Results for All NN Models [29].
NN ArchitectureRNumber of EpochsMSE NN ArchitectureRNumber of EpochsMSE
Training results for the candidate ANN architectures under each of the six training algorithms: Levenberg–Marquardt (LM), Bayesian regularization (BR), resilient backpropagation (RP), gradient descent with adaptive learning rate (GDX), quasi-Newton backpropagation (BFG), and scaled conjugate gradient backpropagation (SCG). For every algorithm, the same 18 architectures were trained, ranging from the single-hidden-layer 4 × 1 × 1 network up to the three-hidden-layer 4 × 10 × 4 × 2 × 1 network, and the correlation coefficient R, the number of training epochs, and the MSE were recorded for each run. The BR-trained networks produced the most consistently high correlation coefficients, in line with the simulation results of Table 9, where the BR-trained 4 (10) 1 network gives the lowest MSE (0.0025).

References

  1. Khorasani, A.M.; Yazdi, M.R.S.; Safizadeh, M.S. Analysis of machining parameters effects on surface roughness: A review. Int. J. Comput. Mater. Sci. Surf. Eng. 2012, 5, 68–84.
  2. Lu, C. Study on prediction of surface quality in machining process. J. Mater. Process. Technol. 2008, 205, 439–450.
  3. Al Hazza, M.H.; Adesta, E.Y. Investigation of the effect of cutting speed on the surface roughness parameters in CNC end milling using artificial neural network. In Proceedings of the IOP Conference Series: Materials Science and Engineering, Kuala Lumpur, Malaysia, 2–4 July 2013; p. 012089.
  4. Alharthi, N.H.; Bingol, S.; Abbas, A.T.; Ragab, A.E.; El-Danaf, E.A.; Alharbi, H.F. Optimizing cutting conditions and prediction of surface roughness in face milling of AZ61 using regression analysis and artificial neural network. Adv. Mater. Sci. Eng. 2017, 2017, 7560468.
  5. Beatrice, B.A.; Kirubakaran, E.; Thangaiah, P.R.J.; Wins, K.L.D. Surface roughness prediction using artificial neural network in hard turning of AISI H13 steel with minimal cutting fluid application. Procedia Eng. 2014, 97, 205–211.
  6. Hossain, M.I.; Amin, A.N.; Patwari, A.U. Development of an artificial neural network algorithm for predicting the surface roughness in end milling of Inconel 718 alloy. In Proceedings of the 2008 International Conference on Computer and Communication Engineering, Kuala Lumpur, Malaysia, 13–15 May 2008; pp. 1321–1324.
  7. Huang, B.P.; Chen, J.C.; Li, Y. Artificial-neural-networks-based surface roughness Pokayoke system for end-milling operations. Neurocomputing 2008, 71, 544–549.
  8. Kadirgama, K.; Noor, M.; Zuki, N.; Rahman, M.; Rejab, M.; Daud, R.; Abou-El-Hossein, K. Optimization of surface roughness in end milling on mould aluminium alloys (AA6061-T6) using response surface method and radial basis function network. Jordan J. Mech. Ind. Eng. 2008, 2, 209–214.
  9. Pal, S.K.; Chakraborty, D. Surface roughness prediction in turning using artificial neural network. Neural Comput. Appl. 2005, 14, 319–324.
  10. Ezugwu, E.; Fadare, D.; Bonney, J.; Da Silva, R.; Sales, W. Modelling the correlation between cutting and process parameters in high-speed machining of Inconel 718 alloy using an artificial neural network. Int. J. Mach. Tools Manuf. 2005, 45, 1375–1385.
  11. Radha Krishnan, B.; Vijayan, V.; Parameshwaran Pillai, T.; Sathish, T. Influence of surface roughness in turning process—An analysis using artificial neural network. Trans. Can. Soc. Mech. Eng. 2019, 43, 509–514.
  12. Vrabel, M.; Mankova, I.; Beno, J.; Tuharský, J. Surface roughness prediction using artificial neural networks when drilling Udimet 720. Procedia Eng. 2012, 48, 693–700.
  13. Saric, T.; Simunovic, G.; Simunovic, K. Use of neural networks in prediction and simulation of steel surface roughness. Int. J. Simul. Model. 2013, 12, 225–236.
  14. Kumar, R.; Chauhan, S. Study on surface roughness measurement for turning of Al 7075/10/SiCp and Al 7075 hybrid composites by using response surface methodology (RSM) and artificial neural networking (ANN). Measurement 2015, 65, 166–180.
  15. Muñoz-Escalona, P.; Maropoulos, P.G. Artificial neural networks for surface roughness prediction when face milling Al 7075-T7351. J. Mater. Eng. Perform. 2010, 19, 185–193.
  16. Fang, N.; Pai, P.S.; Edwards, N. Neural network modeling and prediction of surface roughness in machining aluminum alloys. J. Comput. Commun. 2016, 4, 1–9.
  17. Karagiannis, S.; Stavropoulos, P.; Ziogas, C.; Kechagias, J. Prediction of surface roughness magnitude in computer numerical controlled end milling processes using neural networks, by considering a set of influence parameters: An aluminium alloy 5083 case study. Proc. Inst. Mech. Eng. Part B J. Eng. Manuf. 2014, 228, 233–244.
  18. Zain, A.M.; Haron, H.; Sharif, S. Prediction of surface roughness in the end milling machining using artificial neural network. Expert Syst. Appl. 2010, 37, 1755–1768.
  19. Zhong, Z.; Khoo, L.; Han, S. Prediction of surface roughness of turned surfaces using neural networks. Int. J. Adv. Manuf. Technol. 2006, 28, 688–693.
  20. Benardos, P.; Vosniakos, G.C. Prediction of surface roughness in CNC face milling using neural networks and Taguchi’s design of experiments. Robot. Comput.-Integr. Manuf. 2002, 18, 343–354.
  21. Eser, A.; Aşkar Ayyıldız, E.; Ayyıldız, M.; Kara, F. Artificial intelligence-based surface roughness estimation modelling for milling of AA6061 alloy. Adv. Mater. Sci. Eng. 2021, 2021, 5576600.
  22. Vardhan, M.V.; Sankaraiah, G.; Yohan, M. Prediction of surface roughness & material removal rate for machining of P20 steel in CNC milling using artificial neural networks. Mater. Today Proc. 2018, 5, 18376–18382.
  23. Asiltürk, I.; Akkuş, H. Determining the effect of cutting parameters on surface roughness in hard turning using the Taguchi method. Measurement 2011, 44, 1697–1704.
  24. Bagci, E.; Aykut, Ş. A study of Taguchi optimization method for identifying optimum surface roughness in CNC face milling of cobalt-based alloy (Stellite 6). Int. J. Adv. Manuf. Technol. 2006, 29, 940.
  25. Ghani, J.A.; Choudhury, I.; Hassan, H. Application of Taguchi method in the optimization of end milling parameters. J. Mater. Process. Technol. 2004, 145, 84–92.
  26. Özsoy, N. Experimental investigation of surface roughness of cutting parameters in T6 aluminum alloy milling process. Int. J. Comput. Exp. Sci. Eng. (IJCESEN) 2019, 5, 105–111.
  27. Ranganath, M.; Vipin, M.R.; Prateek, N. Optimization of surface roughness in CNC turning of aluminium 6061 using Taguchi techniques. Int. J. Mod. Eng. Res. 2015, 5, 42–50.
  28. Kovač, P. Design of Experiment; University of Novi Sad, Faculty of Technical Sciences: Novi Sad, Serbia, 2015.
  29. Košarac, A.; Mlađenović, C.; Zeljković, M. Prediction of surface roughness in end milling using artificial neural network. In Proceedings of the International Scientific Conference ETIKUM 2021, Novi Sad, Serbia, 2–4 December 2021.
  30. Ficko, M.; Begic-Hajdarevic, D.; Cohodar Husic, M.; Berus, L.; Cekic, A.; Klancnik, S. Prediction of Surface Roughness of an Abrasive Water Jet Cut Using an Artificial Neural Network. Materials 2021, 14, 3108.
  31. Miljković, Z.; Aleksendrić, D. Artificial Neural Networks, Solved Exercises and Elements of Theory; Faculty of Mechanical Engineering: Belgrade, Serbia, 2009.
  32. Sanjay, C.; Jyothi, C. A study of surface roughness in drilling using mathematical analysis and neural networks. Int. J. Adv. Manuf. Technol. 2006, 29, 846–852.
Figure 1. Down milling.
Figure 2. Main effect plot for S/N ratio.
Figure 3. Simulation using the developed NNs and the experimentally obtained dataset [29].
Figure 4. ANN architecture: 4 (10) 1.
Figure 5. Predicted Ra values in comparison to test dataset.
Figure 6. Neural network error distribution in arithmetic mean roughness prediction.
Figure 7. The plot of data regression (simulation).
Table 1. ANN for prediction of the arithmetic mean roughness (Ra) in different machining processes.
Author, Year | Work Material | Process | Cutting Conditions
Munoz-Escalona et al. (2010) [15] | AA7075-T7351 | Face milling | Experimental input parameters: cutting speed (600, 800, 1000 m/min), feed rate (0.1, 0.2, 0.3 mm/tooth), axial depth of cut (3, 3.5, 4 mm). Feature extraction: chip width and chip thickness.
Benardos et al. (2002) [20] | AA Series 2 | Face milling | Experimental input parameters: cutting speed (300, 500, 700 m/min), feed rate (0.08, 0.14, 0.2 mm/tooth), depth of cut (0.25, 0.75, 1.2 mm), tool engagement (30%, 60%, 100%), cutting fluid (yes, no).
Alharthi et al. (2017) [4] | AZ61 magnesium alloy | Face milling | Experimental input parameters: cutting speed (500, 1000, 1500, 2000 m/min), feed rate (50, 100, 150, 200 mm/min), depth of cut (0.5, 1, 1.5, 2 mm).
Šarić et al. (2013) [13] | S235JRG2 steel | Face milling | Experimental input parameters: number of revolutions (400, 600, 800 rpm), feed rate (100, 300, 500 mm/min), depth of cut (0.5, 1, 1.5 mm), cooling/lubricating technique (without cooling, through the tool, outside cooling).
Kadirgama et al. (2008) [8] | AA6061-T6 | End milling | Experimental input parameters: cutting speed (100, 140, 180 m/min), feed rate (0.1, 0.15, 0.2 mm/rev), axial depth of cut (0.1, 0.15, 0.2 mm), radial depth of cut (2, 3.5, 5 mm).
Huang et al. (2008) [7] | AA6061 | End milling | Experimental input parameters: spindle speed (1750, 1800, 1850, 1900, 2050, 2100, 2200, 2250, 2300, 2400, 2500 rpm), depth of cut (0.04, 0.05, 0.06, 0.07, 0.08) and feed rate (6, 8, 10, 12, 13, 14, 15, 16, 17, 18, 19, 20/min).
Karagiannis et al. (2014) [17] | AA5083 | End milling | Experimental input parameters: spindle speed (5000, 6000, 7000 rpm), depth of cut (0.5, 1, 1.5 mm), and feed rate (0.05, 0.08, 0.1 mm/tooth). Tool geometry parameters: core diameter (%), flute angle (°), rake angle (°), first relief angle (°) and second relief angle (°). Output parameters: Ra, Ry, Rz.
Hossain et al. (2008) [6] | Inconel 718 | End milling | Experimental input parameters: cutting speed (20, 30, 40 m/min), axial depth of cut (0.4, 0.6, 0.8 mm) and feed rate (0.04, 0.075, 0.11 mm/tooth).
Al Hazza et al. (2013) [3] | AISI H13 | End milling | Experimental input parameters: cutting speed 150–250 m/min, feed rate 0.05–0.15 mm/rev and depth of cut 0.1–0.2 mm. Output parameters: Ra, Rt, Rz, Rq.
Zain et al. (2010) [18] | Titanium alloy (Ti-6Al-4V) | End milling | Experimental input parameters: cutting speed (124.53, 130, 144.22, 160, 167.03 m/min), feed rate (0.025, 0.03, 0.046, 0.07, 0.083 mm/tooth), radial rake angle (6.2, 7, 9.5, 13.0, 14.8°).
Vardhan et al. (2018) [22] | P20 | Milling | Experimental input parameters: nose radius (0.8, 1.2 mm), cutting speed (75, 80, 85, 90, 95 m/min), feed rate (0.1, 0.125, 0.75, 1, 1.25, 1.5 mm/tooth), axial depth of cut (0.5, 0.5, 0.8 mm), radial depth of cut (0.3, 0.4, 0.5, 0.6, 0.7 mm). Output parameters: material removal rate (MRR) and surface roughness (Ra).
Eser et al. (2021) [21] | AA6061 | Milling | Experimental input parameters: cutting speed (100, 150, 200 m/min), depth of cut (1, 1.5, 2 mm) and feed rate (0.1, 0.15, 0.2 mm/rev).
Fang et al. (2016) [16] | AA2024-T351 | Turning | Experimental input parameters: cutting speed (150, 250, 350 m/min), depth of cut (0.8 mm constant), tool nose radius (0.8 mm constant), feed rate (varied at five levels based on the ratio to the tool radius: 1.0, 1.5, 2.0, 2.5, and 3.0).
Krishnan et al. (2019) [11] | AA6063 | Turning | Experimental input parameters: spindle speed (2000 rpm), feed rate (0.1 mm/rev), depth of cut (0.5 mm). Feature extraction: frequency-range grayscale value, major peak frequency (F1), and the principal component magnitude squared (F2).
Kumar et al. (2015) [14] | AA7075/10/SiCp and AA7075 hybrid composites | Turning | Experimental input parameters: cutting speed (80, 110, 140, 170 m/min), feed rate (0.05, 0.1, 0.15, 0.2 mm/rev), approach angle (45, 60, 75, 90°). Experiments were conducted without cooling/lubricating media (dry cutting).
Zhong et al. (2006) [19] | Aluminum and copper | Turning | Experimental input parameters: tool insert grade (TiAlN-coated carbide, PCD), tool insert nose radius (0.2, 0.4, 0.8 mm), tool insert rake angle (0, +5, +15°), workpiece material (aluminum, copper), cutting speed (500, 1000, 2500 rev/min), feed rate (0.01, 0.1, 0.2 mm/rev), depth of cut (0.05, 0.5, 1 mm). Output parameters: surface roughness parameters Ra, Rt.
Pal et al. (2005) [9] | Mild steel | Turning | Experimental input parameters: cutting speed (325, 420, 550 m/min), depth of cut (0.2, 0.5, 0.8 mm) and feed rate (0.04, 0.1, 0.2 mm/rev).
Beatrice et al. (2014) [5] | Hardened steel H13 | Turning | Experimental input parameters: feed rate (0.05, 0.075, 0.1 mm/rev), cutting speed (75, 95, 115 m/min), depth of cut (0.5, 0.75, 1 mm).
Ezugwu et al. (2005) [10] | Inconel 718 | Turning | Experimental input parameters: cutting speed (20, 30, 40, 50 m/min), feed rate (0.25, 0.30 mm/rev), cutting time (312, 774 s) and coolant delivery pressure (110, 150, 203 bar). Seven output parameters were observed: cutting force (Fz), feed force (Fx), power consumption (P), surface roughness (Ra), average flank wear (VB), maximum flank wear (VBmax), and nose wear (VC).
Vrabel et al. (2012) [12] | Udimet 720 | Drilling | Experimental input parameters: feed rate, cutting speed, thrust force. Output parameters: drill flank wear (VB) and surface roughness.
Table 2. ANN types, network structure, and dataset for network training.
Author, Year | ANN Type, Training Algorithm, Network Topology | Dataset | Remarks
Munoz-Escalona et al. (2010) [15] | The radial basis function NN (RBF NN), generalized regression NN (GRNN), and feedforward backpropagation NN (FFBP NN) were compared. | Taguchi DoE, L9 orthogonal array. | The presented results show a good correlation between the surface roughness and the thickness of the chip. The FFBP network gave better results than the radial basis network.
Benardos et al. (2002) [20] | FFBP NN; training algorithm: Levenberg–Marquardt (LM); network topology 5-3-1 showed the best performance. | 27 samples (18 training, 4 validation, 5 testing). | The ANN was able to predict the surface roughness with a mean squared error (MSE) of 1.86%.
Alharthi et al. (2017) [4] | FFBP NN; training algorithms: LM, momentum; network topology 3-6-1 showed the best performance. | Full factorial plan; 64 samples (80% training set, 20% testing and validation set). | Two models were analyzed: an ANN and a regression model. The predicted Ra values were compared to the experimental results; both models predicted Ra with high accuracy. The determination coefficients were 95% for the best neural network and 94% for the regression analysis.
Šarić et al. (2013) [13] | Several NN models were analyzed: RBF NN, FFBP NN, and modular NN (MNN); training algorithm and optimal topology not stated. | Not stated. | Different learning rules and transfer functions were analyzed. For all three NN types, the best results were obtained with the sigmoid transfer function. The delta learning rule provided the best results for the FFBP NN and MNN, whereas the normalized-cumulative delta rule provided the best results for the RBF NN. All networks could be implemented efficiently in surface roughness prediction.
Kadirgama et al. (2008) [8] | RBF NN compared to the response surface method (RSM). | Not stated. | The RBF NN can predict the arithmetic mean roughness more accurately than RSM.
Huang et al. (2008) [7] | FFBP NN; training algorithm not stated; network topology 5-8-7-1 showed the best performance. | 336 samples. | After launching the adaptive control function, the arithmetic mean roughness Ra was smaller.
Karagiannis et al. (2014) [17] | FFBP NN; training algorithm not stated; network topology 8-7-5-4-3 showed the best performance. | 18 samples based on an L18 (2^1 × 3^7) orthogonal array. | The network had three output parameters: Ra, Ry, Rz. Correlation coefficient during training R = 1, validation R = 0.89, testing R = 0.93. Enhancing the FFBP NN model would require a larger number of trials.
Hossain et al. (2008) [6] | FFBP NN; training algorithm LM; network topology not stated. | 27 samples. | The NN had very good predictive performance.
Al Hazza et al. (2013) [3] | FFBP NN; training algorithm LM; network topology 3-20-4-4 showed the best performance. | 20 samples, data ratio 70:15:15. | Predicted and experimental data were in good agreement.
Zain et al. (2010) [18] | FFBP NN; training algorithm: gradient descent with momentum and adaptive learning rate backpropagation; network topology 3-1-1 showed the best performance. | 24 samples, data ratio 85:15. | The NN model could predict Ra using a small dataset; the small number of samples was not a hindrance to obtaining good prediction results.
Vardhan et al. (2018) [22] | FFBP NN; training algorithm not stated; network topology 5-8-8-2 showed the best performance. | 50 samples based on an L50 (2^1 × 5^4) orthogonal array. | The developed ANN predicted the arithmetic mean roughness Ra and the MRR with deviations of 4.3785% and 17.45823% from the test dataset.
Eser et al. (2021) [21] | FFBP NN; five training algorithms: Broyden–Fletcher–Goldfarb–Shanno (BFGS), central pattern generator (CPG), Levenberg–Marquardt (LM), resilient backpropagation (RP), scaled conjugate gradient (SCG); network topology not stated. | 27 samples. | RSM and ANN models were compared; both provided results very close to the experimentally obtained ones. The ANN trained using the SCG algorithm showed the best results.
Fang et al. (2016) [16] | Two NN models were used: FFBP NN and RBF NN; training algorithm and network topology not stated. | 45 samples (38 training, 7 testing). | Prediction of the arithmetic mean roughness (Ra) by the RBF and MLP models was compared. The MLP model provided better results, especially in the prediction of maximum roughness height.
Krishnan et al. (2019) [11] | FFBP NN; training algorithm not stated; network topology 6-10-1 showed the best performance. | 40 samples. | The presented methodology used an ANN to detect errors in the surface roughness of the materials.
Kumar et al. (2015) [14] | FFBP NN; training algorithm: variable learning rate backpropagation (GDX); two hidden layers with 10 neurons. | Full factorial plan; 64 samples. | Response surface methodology (RSM) was used for the analysis of the experimental data, and the ANN and RSM predictions were compared to the experiment. The correlation coefficient of the RSM with the experiment was 0.9972 and that of the ANN 0.99571, so the ANN showed a greater deviation than the RSM prediction.
Zhong et al. (2006) [19] | FFBP NN; training algorithm not stated; network topology 7-14-18-2 showed the best performance. | 304 samples (274 training, 30 testing). | Surface roughness parameters predicted by the neural network were in good agreement with the experimentally obtained ones.
Pal et al. (2005) [9] | FFBP NN; training algorithm not stated; network topology 5-5-1 showed the best performance. | 27 samples (20 training, 7 testing). | Predicted surface roughness was in good agreement with the experimental data.
Beatrice et al. (2014) [5] | FFBP NN; training algorithm LM; network topology 3-7-7-1 showed the best performance in surface roughness prediction. | 27 samples based on the L27 orthogonal array (23 training, 4 testing). | A neural network model was developed using a small dataset. Despite this, it predicted the arithmetic mean roughness with considerable accuracy: the error between the NN simulation results and the experimentally obtained ones was less than 7%.
Ezugwu et al. (2005) [10] | FFBP NN; training algorithms: LM and Bayesian regularization (BR); network topology 5-10-10-7 trained by the BR algorithm showed the best performance. | 102 samples; LM dataset ratio 50:25:25, BR dataset ratio 67:33. | Networks with one and two hidden layers (10 and 15 neurons) were trained by the LM and BR algorithms. The best results were shown by the BR-trained network with two hidden layers and 15 neurons in each.
Vrabel et al. (2012) [12] | FFBP NN; training algorithm not stated; topologies 3-4-1 and 3-5-1 showed the best performance in tool wear prediction, and 4-6-1 and 4-6-4-1 in surface roughness prediction. | 42 samples (32 training, 10 testing). | NN 3-5-1 was used in the prediction of tool wear, with an average RMS error of 12.7%. NN 4-6-4-1 was used for the prediction of surface roughness, with an average RMS error of 2.64%.
Table 3. Experimental factors and their levels.
Factor | Level 1 | Level 2 | Level 3
Cutting speed (m/min) | 120 | 160 | 200
Axial depth of cut (mm) | 0.6 | 0.8 | 1
Feed per tooth (mm/tooth) | 0.05 | 0.1 | 0.15
Cooling/lubricating technique | Emulsion | Dry cut | Air
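With the four factors of Table 3 at three levels each, a full factorial design would require 3^4 = 81 runs, while the Taguchi L27 orthogonal array used in this study keeps a balanced subset of 27 runs (Table 5). The count reduction can be sketched as follows (factor names and level values are taken from Table 3; the actual L27 column assignment is not reproduced here):

```python
from itertools import product

# Factor levels from Table 3
levels = {
    "cutting_speed_m_min": [120, 160, 200],
    "axial_depth_mm": [0.6, 0.8, 1.0],
    "feed_mm_tooth": [0.05, 0.1, 0.15],
    "cooling": ["Emulsion", "Dry cut", "Air"],
}

# Full factorial design: every combination of the four three-level factors
full_factorial = list(product(*levels.values()))
print(len(full_factorial))  # 81 runs

# A Taguchi L27 orthogonal array retains only 27 of these combinations,
# chosen so that every pair of factor levels appears equally often.
```

The 27 rows of Table 5 are exactly such a balanced subset, which is why main effects can still be estimated for all four factors.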
Table 4. The chemical composition of AA7075 aluminum alloy.
Element | Fe | Si | Cu | Mn | Mg | Zn | Cr | Ti | Other, each | Al
% | 0.5 | 0.4 | 1.2–2 | 0.3 | 2.1–2.9 | 5.1–6.1 | 0.21 | 0.21 | 0.15 | Balance
Table 5. Experimental plan—L27 orthogonal array (Taguchi method) [29].
Trial | Cutting Speed (m/min) | Axial Depth of Cut (mm) | Feed per Tooth (mm/tooth) | Cooling/Lubricating Technique | Ra 1 (µm) | Ra 2 (µm) | Ra 3 (µm) | Average Ra (µm) | S/N Ratio (dB)
1 | 120 | 0.6 | 0.05 | Emulsion | 0.348 | 0.361 | 0.345 | 0.3513 | 9.085613
2 | 120 | 0.6 | 0.1 | Dry machining | 0.477 | 0.476 | 0.476 | 0.476 | 6.441781
3 | 120 | 0.6 | 0.15 | Air cooling | 0.936 | 0.925 | 0.926 | 0.929 | 0.639686
4 | 120 | 0.8 | 0.05 | Dry machining | 0.264 | 0.262 | 0.259 | 0.261 | 11.68051
5 | 120 | 0.8 | 0.1 | Air cooling | 0.515 | 0.514 | 0.516 | 0.515 | 5.763855
6 | 120 | 0.8 | 0.15 | Emulsion | 0.973 | 0.971 | 0.973 | 0.972 | 0.243697
7 | 120 | 1 | 0.05 | Air cooling | 0.261 | 0.259 | 0.259 | 0.260 | 11.71168
8 | 120 | 1 | 0.1 | Emulsion | 0.574 | 0.689 | 0.633 | 0.632 | 3.98291
9 | 120 | 1 | 0.15 | Dry machining | 0.868 | 0.865 | 0.87 | 0.868 | 1.232942
10 | 160 | 0.6 | 0.05 | Emulsion | 0.327 | 0.39 | 0.40 | 0.369 | 8.665359
11 | 160 | 0.6 | 0.1 | Dry machining | 0.529 | 0.499 | 0.501 | 0.510 | 5.854275
12 | 160 | 0.6 | 0.15 | Air cooling | 0.939 | 0.929 | 0.935 | 0.934 | 0.589963
13 | 160 | 0.8 | 0.05 | Dry machining | 0.319 | 0.319 | 0.315 | 0.318 | 9.960567
14 | 160 | 0.8 | 0.1 | Air cooling | 0.566 | 0.569 | 0.563 | 0.566 | 4.943671
15 | 160 | 0.8 | 0.15 | Emulsion | 1.082 | 1.088 | 1.083 | 1.084 | −0.70326
16 | 160 | 1 | 0.05 | Air cooling | 0.309 | 0.301 | 0.306 | 0.305 | 10.30452
17 | 160 | 1 | 0.1 | Emulsion | 0.825 | 0.809 | 0.809 | 0.814 | 1.783956
18 | 160 | 1 | 0.15 | Dry machining | 0.978 | 0.973 | 0.975 | 0.975 | 0.216939
19 | 200 | 0.6 | 0.05 | Emulsion | 0.434 | 0.419 | 0.413 | 0.422 | 7.493751
20 | 200 | 0.6 | 0.1 | Dry machining | 0.558 | 0.598 | 0.575 | 0.577 | 4.776484
21 | 200 | 0.6 | 0.15 | Air cooling | 0.923 | 0.93 | 0.919 | 0.924 | 0.686561
22 | 200 | 0.8 | 0.05 | Dry machining | 0.3 | 0.306 | 0.296 | 0.300 | 10.43829
23 | 200 | 0.8 | 0.1 | Air cooling | 0.53 | 0.514 | 0.519 | 0.521 | 5.663246
24 | 200 | 0.8 | 0.15 | Emulsion | 1.088 | 1.095 | 1.088 | 1.090 | −0.75119
25 | 200 | 1 | 0.05 | Air cooling | 0.284 | 0.282 | 0.287 | 0.284 | 10.92344
26 | 200 | 1 | 0.1 | Emulsion | 0.723 | 0.732 | 0.725 | 0.727 | 2.773295
27 | 200 | 1 | 0.15 | Dry machining | 0.938 | 0.938 | 0.943 | 0.940 | 0.540524
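The S/N ratios in Table 5 follow the Taguchi smaller-the-better formulation, S/N = −10·log10((1/n)·Σ yᵢ²), computed over the three Ra replicates of each trial. A quick check against trial 1 (the last digits may differ slightly from the table depending on the rounding of the measured Ra values):

```python
import math

def sn_smaller_is_better(values):
    """Taguchi smaller-the-better S/N ratio in dB."""
    mean_sq = sum(v * v for v in values) / len(values)
    return -10.0 * math.log10(mean_sq)

# Trial 1 of Table 5: Ra replicates 0.348, 0.361, 0.345 µm
sn = sn_smaller_is_better([0.348, 0.361, 0.345])
print(round(sn, 2))  # ≈ 9.08 dB, matching the 9.0856 dB listed for trial 1
```

Because Ra is to be minimized, a larger S/N ratio marks a better parameter combination, which is why the optimum in Table 6 is read off the highest level means.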
Table 6. S/N ratios of the arithmetic mean roughness (Ra).
Factor | Level 1 | Level 2 | Level 3 | Delta | Rank
Cutting speed | 5.64 | 4.62 | 4.73 | 1.02 | 3
Axial depth of cut | 4.92 | 5.25 * | 4.83 | 0.42 | 4
Feed per tooth | 10.03 * | 4.67 | 0.30 | 9.73 | 1
Cooling/lubricating technique | 3.62 | 5.68 | 5.69 * | 2.07 | 2
* Optimal level.
Table 7. Optimum factor levels.
Factor | Cutting Speed | Depth of Cut | Feed per Tooth | Cooling/Lubricating
Optimal level | 1 | 2 | 1 | 3
Predicted S/N: 11.619 dB; calculated Ra: 0.262458 µm; test Ra: 0.2556 µm.
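The predicted S/N at the optimum (cutting speed level 1, depth level 2, feed level 1, cooling level 3) follows the standard Taguchi additive model: the overall mean S/N plus the deviation of each optimal level mean from that mean; the corresponding Ra then follows by inverting the S/N definition for a single value. Reproducing Table 7 from the level means of Table 6:

```python
# Level-mean S/N values (dB) from Table 6 (rows: speed, depth, feed, cooling)
level_means = {
    "speed":   [5.64, 4.62, 4.73],
    "depth":   [4.92, 5.25, 4.83],
    "feed":    [10.03, 4.67, 0.30],
    "cooling": [3.62, 5.68, 5.69],
}
optimal = {"speed": 0, "depth": 1, "feed": 0, "cooling": 2}  # level indices

# Overall mean S/N: the average of any factor's level means (up to rounding)
m = sum(level_means["speed"]) / 3

# Additive model: S/N_pred = m + sum over factors of (optimal level mean - m)
sn_pred = m + sum(level_means[f][optimal[f]] - m for f in optimal)
ra_pred = 10 ** (-sn_pred / 20)  # invert S/N = -20*log10(Ra) for a single value

print(round(sn_pred, 2), round(ra_pred, 4))  # ≈ 11.62 dB and ≈ 0.2624 µm
```

Both values agree with Table 7 (11.619 dB and 0.262458 µm), and the confirmation run (Ra = 0.2556 µm) falls close to the prediction.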
Table 8. Results of ANOVA for the arithmetic mean roughness.
Factor | DOF | Sum of Squares | Variance | F | Percent
A (cutting speed) | 2 | 0.024 | 0.012 | 8.347 | 1.163%
B (axial depth of cut) | 2 | 0.005 | 0.003 | 1.884 | 0.263%
C (feed per tooth) | 2 | 1.914 | 0.957 | 659.569 | 91.932%
D (cooling/lubricating technique) | 2 | 0.112 | 0.056 | 38.654 | 5.388%
Error | 18 | 0.03 | 0.001 | | 1.254%
Total | 26 | 2.08 | | | 100%
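The percent contributions in Table 8 correspond, approximately, to each factor's share of the total sum of squares; small differences from the published figures presumably stem from rounding of the tabulated sums of squares. A quick check (values from Table 8):

```python
# Sums of squares from Table 8 (A: speed, B: depth, C: feed, D: cooling)
ss = {"A": 0.024, "B": 0.005, "C": 1.914, "D": 0.112, "Error": 0.03}
ss_total = sum(ss.values())  # ≈ 2.085; Table 8 lists the total as 2.08

# Percent contribution of each source of variation
contributions = {k: 100 * v / ss_total for k, v in ss.items()}
print({k: round(v, 1) for k, v in contributions.items()})
# Feed per tooth (C) dominates at roughly 92%, as in Table 8.
```

This confirms the ranking of Table 6: feed per tooth is by far the most influential factor on Ra, followed by the cooling/lubricating technique.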
Table 9. MSE between actual values (output set obtained experimentally) and estimated values (output of NN simulated by the input set).
No. | Training Algorithm | ANN Architecture | MSE
1 | SCG | 4 (1) 1 | 0.0053
2 | SCG | 4 (3) 1 | 0.0305
3 | SCG | 4 (4) 1 | 0.00997
4 | SCG | 4 (2-2) 1 | 0.0225
5 | SCG | 4 (5-2) 1 | 0.0063
6 | SCG | 4 (5-2-3) 1 | 0.0102
7 | BFG | 4 (1) 1 | 0.0127
8 | BFG | 4 (10-4) 1 | 0.0172
9 | BFG | 4 (10-4-2) 1 | 0.015
10 | GDX | 4 (1) 1 | 0.0043
11 | GDX | 4 (3) 1 | 0.0194
12 | GDX | 4 (8) 1 | 0.0680
13 | RP | 4 (1) 1 | 0.00348
14 | RP | 4 (8-4) 1 | 0.01593
15 | BR | 4 (10) 1 | 0.0025
16 | BR | 4 (1-1) 1 | 0.0028
17 | LM | 4 (5-2) 1 | 0.00823
18 | LM | 4 (10-4-2) 1 | 0.01287
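The networks behind Table 9 all follow a 4-n-1 pattern: four process inputs (speed, depth, feed, cooling) and one Ra output. As an illustration of how such a small-data model can be fitted, the sketch below trains a tiny 4-10-1 feedforward network with plain batch gradient descent on a few rows of Table 5. This NumPy-only sketch stands in for the MATLAB training algorithms used in the paper; the learning rate, epoch count, and numeric encoding of the cooling technique are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# A few trials from Table 5: [speed m/min, depth mm, feed mm/tooth, cooling code]
# Cooling encoded 0 = emulsion, 1 = dry machining, 2 = air (illustrative encoding).
X = np.array([
    [120, 0.6, 0.05, 0], [120, 0.6, 0.10, 1], [120, 0.6, 0.15, 2],
    [160, 0.8, 0.05, 1], [160, 0.8, 0.10, 2], [160, 0.8, 0.15, 0],
    [200, 1.0, 0.05, 2], [200, 1.0, 0.10, 0], [200, 1.0, 0.15, 1],
], dtype=float)
y = np.array([0.3513, 0.476, 0.929, 0.318, 0.566, 1.084,
              0.284, 0.727, 0.940]).reshape(-1, 1)  # average Ra (µm)

# Normalize inputs to [0, 1] per column, as is usual before ANN training
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# 4-10-1 network: tanh hidden layer, linear output
W1 = rng.normal(0, 0.5, (4, 10)); b1 = np.zeros(10)
W2 = rng.normal(0, 0.5, (10, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, out = forward(X)
mse_start = float(np.mean((out - y) ** 2))

lr = 0.05
for _ in range(2000):                      # plain batch gradient descent
    h, out = forward(X)
    d_out = 2 * (out - y) / len(y)         # dMSE/d(out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)    # backprop through tanh
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

_, out = forward(X)
mse_end = float(np.mean((out - y) ** 2))
print(mse_end < mse_start)  # True: training reduces the fit error
```

The paper's LM and BR algorithms replace this plain gradient step with second-order updates and regularization, which is what makes them converge reliably on a 27-sample dataset.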
Kosarac, A.; Mladjenovic, C.; Zeljkovic, M.; Tabakovic, S.; Knezev, M. Neural-Network-Based Approaches for Optimization of Machining Parameters Using Small Dataset. Materials 2022, 15, 700. https://doi.org/10.3390/ma15030700