Article

Artificial Neural Networks-Based Prediction of Hardness of Low-Alloy Steels Using Specific Jominy Distance

University of Rijeka, Faculty of Engineering, Vukovarska 58, 51000 Rijeka, Croatia
* Author to whom correspondence should be addressed.
Academic Editors: Diego Celentano, Francesca Borgioli and Lijun Zhang
Metals 2021, 11(5), 714; https://doi.org/10.3390/met11050714
Received: 3 March 2021 / Revised: 2 April 2021 / Accepted: 20 April 2021 / Published: 27 April 2021

Abstract

Successful prediction of the relevant mechanical properties of steels is of great importance to materials engineering. The aim of this research is to investigate the possibility of reducing the complexity of artificial neural networks-based prediction of total hardness of hypoeutectoid, low-alloy steels based on chemical composition, by introducing the specific Jominy distance as a new input variable. For prediction of total hardness after continuous cooling of steel (output variable), ANNs were developed for different combinations of inputs. Input variables for the first configuration of ANNs were the main alloying elements (C, Si, Mn, Cr, Mo, Ni), the austenitizing temperature, the austenitizing time, and the cooling time to 500 °C, while in the second configuration alloying elements were substituted by the specific Jominy distance. Comparing the results of total hardness prediction, it can be seen that the ANN using the specific Jominy distance as input variable (runseen = 0.873, RMSEunseen = 67, MAPE = 14.8%) is almost as successful as ANN using main alloying elements (runseen = 0.940, RMSEunseen = 46, MAPE = 10.7%). The research results indicate that the prediction of total hardness of steel can be successfully performed only based on four input variables: the austenitizing temperature, the austenitizing time, the cooling time to 500 °C, and the specific Jominy distance.
Keywords: low-alloy steels; quenching; mechanical properties; hardness; artificial neural networks

1. Introduction

The mechanical behavior of steels determines their usefulness in a variety of applications. Different loads that materials experience in their application make it necessary to identify the limiting values that can be withstood without failure or permanent deformation [1]. Knowledge of the mechanical behavior of steels during manufacturing processes (such as heat treatment) is also necessary since it directly influences the mechanical properties of steel components.
Quenching is a common heat treatment process usually implemented to produce steel components with reliable service properties. The most common use of quenching is the hardening of steel. Although quenching is a vital part of the production of highly loaded steel components and load-carrying machine elements, it is also one of the major causes of rejected components and production losses, due to uncontrollable distortion, residual stress and cracking of steel components [2]. Therefore, the heat treatment industry needs computer modeling of the quenching processes to control and optimize the process parameters, in order to achieve the desired distribution of microstructure and properties, to avoid cracking, and to reduce distortion of final parts.
Mechanical properties of steel are directly related to its mechanical behavior during heat treatment. Therefore, successful prediction of the relevant mechanical properties is the first step in predicting the mechanical behavior of steel during and after heat treatment, such as resistance to fracture and distortions [3,4,5].
When quenching is considered, mechanical properties of steel mostly depend on microstructure constituents and temperature evolution during the treatment. Variation of temperature at any point in the component is the major driving force for phase transformations. Upon cooling, the thermodynamic stability of the parent phase is altered, which results in the decomposition of austenite into transformation products. Transformation rate depends on the temperature and the cooling rate. Consequently, by changing the cooling rate, a wide range of mechanical properties can be obtained in steel parts.
However, there are other factors such as alloying elements, grain size refinement, internal stresses, microstructure heterogeneity and crystal imperfections [6,7,8,9] which also significantly affect the resulting mechanical properties that should be considered in prediction of mechanical properties of quenched parts.
Prediction of mechanical properties is usually based on semi-empirical methods. In [10], equations for predicting the hardness of microstructure constituents after the isothermal decomposition of austenite are proposed. The developed model can serve as a very good basis for predicting hardness in continuous steel cooling using Scheil’s additivity rule. One of the most common semi-empirical methods for predicting hardness during continuous cooling is based on the continuous cooling transformation (CCT) diagrams [11]. Additionally, the distribution of hardness in quenched steel can be predicted using the Jominy test results, by transferring the results from the Jominy curve to the hardness values for different points of the steel specimen [4,12].
Artificial neural networks (ANNs) are empirical methods very useful in modeling, estimating, predicting and process control in different science and technology fields. The neural network concept proved to be powerful and versatile for predicting materials’ properties based on chemical composition and/or other variables, particularly in cases when some of the influences are unknown, as well as for solving many complex phenomena for which physical models do not exist. For example, in [13] authors show how a hybrid strategy, which combines decision trees and ANNs, can be used for accurate and reliable prediction of ore crushing plate lifetimes. In [14], region convolutional neural networks are used for automatic detection of steel surface defects in product quality control. In [15], authors show the application of neural networks to a cyclic elastoplastic material as well as to a more complex thermo-viscoplastic steel solidification model. In [16,17,18], ANNs address the problem of extracting the Jominy hardness profiles of steels directly from the chemical composition. In [19,20], ANN is applied for predicting microstructure composition of quenched steel based on chemical composition, the austenitizing temperature, the austenitizing time, the cooling time from 800 to 500 °C, the austenite grain size and the total hardness of quenched steel, while in [21], ANN is applied for predicting as-welded HAZ hardness based on the cooling time from 800 to 500 °C and chemical composition of steel. In [22,23,24,25,26], ANNs are successfully applied for predicting mechanical properties of steels and steel microconstituents.
Due to the high accuracy of the prediction results, as compared with results of mathematical modeling and regression analysis, ANNs have also continued to develop in recent years. Among the neural networks used, the feed-forward neural network with the backpropagation learning algorithm (BPNN) is commonly used [27,28].
Hardness, considered one of the fundamental mechanical properties of a material, is also of great practical importance. The resulting hardness of steels depends mainly on their chemical composition; however, the hardenability of steels and the heat treatment parameters, such as temperature, time and cooling rate, which can control the microstructure and crystal grain size, are also related to the hardness.
Hardness is often used as a basis for prediction of various elastic and plastic stress-strain properties, as well as fatigue properties of steel [12,29,30,31]. Hardness prediction is very important in the heat treatment of steel. Based on hardness, it is possible to predict other mechanical properties of steel after quenching and tempering, such as tensile strength, yield strength, elongation and contraction [32]. Consequently, the ability to predict hardness and its distribution in heat-treated steel parts with improved accuracy also opens the possibility of better predicting advanced material properties.
The aim of this research is to investigate the possibility of reducing the complexity of artificial neural networks-based prediction of total hardness, HVtot, of hypoeutectoid, low-alloy steels, based on detailed chemical composition, by introducing the specific Jominy distance, Ed, as a new input variable, and provide another approach to prediction when detailed chemical composition is unknown.
For that purpose, ANNs were developed for different combinations of inputs. Input variables for the first configuration of ANNs were the main alloying elements (C, Si, Mn, Cr, Mo, Ni), the austenitizing temperature, the austenitizing time, and the cooling time to 500 °C, while in the second configuration alloying elements were substituted by the specific Jominy distance. In total 423 datasets of 24 hypoeutectoid, low-alloy steels were used to develop and test ANNs.

2. Materials and Data

2.1. Materials

The hypoeutectoid, low-alloy steels that the present study deals with are steels with less than ~0.8 wt. % of carbon, containing alloying elements, including carbon, up to a total content of about 5.0 wt. %. Low-alloy steels with suitable alloy compositions have greater hardenability than structural carbon steels and, thus, can provide high strength and good toughness in heat-treated thicker sections. Carbon is the main hardening element in all steels except the austenitic precipitation hardening steels, maraging steels and interstitial-free steels. The strengthening effect of carbon in steels consists of solid solution strengthening and carbide dispersion strengthening [29].
Experimental materials data were collected from the relevant literature [33] for 24 hypoeutectoid, low-alloy steels, continuously cooled from the austenite range to room temperature.
Table 1 lists the chemical composition (wt. %) of the studied steels. The low-alloy steels in Table 1 are steels for case hardening based on carburizing (Data No. 1–8) and steels for quenching and tempering (Data No. 9–24). In addition to carbon, the main alloying elements in these steels are silicon, manganese, chromium, molybdenum and nickel. Addition of silicon improves yield strength, while addition of manganese, chromium, molybdenum and nickel improves hardenability. Also, if combined with manganese or molybdenum, silicon may produce greater hardenability of steels [29].

2.2. Input Variables and Data

The hardness of steels for case hardening based on carburizing and steels for quenching and tempering depends mainly on the volume fractions of steel microconstituents, i.e., martensite, bainite and the ferrite-pearlite mixture. Higher cooling rates during the cooling of steel from the austenitizing temperature result in a higher volume fraction of hard martensite, while lower cooling rates result in a higher volume fraction of the soft ferrite-pearlite mixture. Therefore, the volume fractions of those microconstituents and the hardness depend mainly on the cooling rate of the steel. The cooling rate during cooling from the austenitizing temperature is adequately defined by the cooling time from the austenitizing temperature to 500 °C. This is further confirmed by the fact that prediction of the as-quenched hardness of steel based on the cooling time from 800 °C to 500 °C is well known in the literature and practice, where the as-quenched hardness at different workpiece points is estimated by conversion of the cooling time to hardness. This conversion is provided by the relationship between the cooling time and the distance from the quenched end of the Jominy test specimen [4,12], making the cooling time to 500 °C, t500, a good candidate for one of the main input variables in an ANN for prediction of hardness.
An equally important factor influencing the steel hardness is the hardenability of the steel. In general, steels with lower diffusion of carbon and other alloying elements have higher hardenability. Different alloying elements suppress the diffusional pearlitic and bainitic transformations in steel to a greater or lesser extent, which means that different alloying elements can influence the volume fractions of microconstituents. Hardenability of steel can be incorporated into the prediction model by taking into account the chemical composition. Additionally, hardenability can be incorporated into the prediction model by taking into account the specific Jominy distance, Ed, which depends on the chemical composition of the steel and corresponds to the Jominy distance at which 50% of the microstructure is martensite (Figure 1) [4,12,34].
Distance Ed can be determined/estimated from the Jominy curve based on hardness of steel with 50% martensite in the microstructure, HRC50%M [35]:
HRC50%M = 44·c0 + 14,
where c0 is the mass fraction of carbon in the steel.
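As an illustration of how Ed can be estimated in practice, the equation above can be combined with linear interpolation on a Jominy curve. The sketch below uses a purely hypothetical set of Jominy hardness readings, not data from the paper:

```python
import numpy as np

def hrc_50_percent_martensite(c0):
    """Hardness (HRC) of steel with 50% martensite, from the carbon mass fraction c0 (wt. %)."""
    return 44 * c0 + 14

# Hypothetical Jominy curve: hardness (HRC) vs. distance from the quenched end (mm).
# These readings are illustrative placeholders, not measurements from the studied steels.
dist_mm = np.array([1.5, 3, 5, 7, 9, 11, 13, 15, 20, 25])
hrc = np.array([57.0, 56, 54, 50, 45, 40, 36, 33, 30, 28])

# Specific Jominy distance Ed: the distance at which hardness equals HRC_50%M.
# np.interp needs ascending x, so interpolate distance over the reversed (ascending) hardness.
target = hrc_50_percent_martensite(0.42)           # e.g., 0.42 wt. % C -> 32.48 HRC
ed_mm = float(np.interp(target, hrc[::-1], dist_mm[::-1]))
```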
Since specific Jominy distance is related to hardenability, it can be assumed that artificial neural networks-based prediction of hardness using specific Jominy distance instead of chemical composition can be applied in case when detailed chemical composition is unknown, and as additional model of prediction of hardness of heat-treated steels.
Other factors influencing the steel hardness after cooling from the austenitizing temperature are the austenitizing temperature, Ta, and the austenitizing time, ta, which influence the austenite grain size. A higher austenitizing temperature and a longer austenitizing time result in austenite grain growth and increased solubility of carbon and other alloying elements in austenite, which influence the kinetics of austenite decomposition and the hardness of steel.
It is worth noting that besides the chemical composition, mechanical properties of studied steels significantly depend on the crystal grain size. With an increase in the heating temperature or holding time in the austenite range, the grains begin to grow intensively. However, since the majority of used CCT diagrams do not provide information about the austenite grain size, and for the sake of practicality, this parameter was not selected as input variable in this research.
Based on the previous discussion and arguments, prediction of total hardness, HVtot, by artificial neural networks was designed to be based on 10 input variables: main alloying elements, heat treatment parameters and the specific Jominy distance, as shown in detail in Table 2.
The data were acquired from the results of experiments reported in the literature [33]. For each steel listed in Table 1, between 8 and 13 datasets were collected which, in addition to the main alloying elements, contain information on heat treatment parameters such as the austenitizing temperature, Ta, the austenitizing time, ta, the cooling time to 500 °C, t500, the specific Jominy distance, Ed, and the total hardness after continuous cooling, HVtot. In total, 423 datasets were collected. Table 3 shows a sample dataset for steel 42CrMo4 (data No. 19 in Table 1). All datasets are provided in the supplementary material accompanying the paper (Table S1).

3. Methods

Development of Artificial Neural Networks for Prediction of Total Hardness after Continuous Cooling, HVtot

Artificial neural networks are flexible, nonlinear computational models that can be successfully used in function approximation problems in various areas of research, thus also for prediction/estimation of material properties based on heat treatment parameters. Artificial neural networks are inspired by biological neural networks. They consist of highly or fully connected artificial neurons that are divided into three layers—input layer (one neuron for each input variable), hidden layers (one or more neurons in one or more hidden layers) and output layer (one neuron for each output variable), which are connected by weights. Considering the features of the investigated problem, two-layered artificial neural network (Figure 2) was used in this research.
For the given problem, i.e., prediction of total hardness after continuous cooling, HVtot, several two-layer multilayer perceptrons (MLP) with a hyperbolic tangent transfer function in the hidden layer and a linear transfer function in the output layer were developed. According to [36], this kind of artificial neural network can approximate any arbitrary function well. The common algorithm for supervised learning with an MLP, as is the case in this research, is the backpropagation algorithm. The main goal of artificial neural network development is to adjust the weights so that the error function, in this case the mean square error, MSE, is minimal, i.e., that the output values (predicted hardness values, HVtot,pred) are close to the target values (experimental hardness values, HVtot). Once the initial values of the weights are set, input signals propagate layer by layer from the input to the output layer. This is called the forward phase. The forward phase is followed by backpropagation, i.e., the backward phase, in which the error signals, obtained by comparison of the target and output values, propagate layer by layer, only now from the output layer to the input layer. In the backward phase, the weights in the ANN are adjusted to minimize the error function, as previously explained. This error-correction learning stops when a certain criterion is met.
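As an illustrative sketch (not the trained networks from this study), the forward phase of such a two-layer MLP, with a hyperbolic tangent transfer function in the hidden layer and a linear output, can be written in a few lines; all weights below are random placeholders, and the layer sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy two-layer MLP: I inputs -> H tanh hidden neurons -> 1 linear output neuron.
# The weight values are random placeholders, not weights fitted to the paper's data.
I, H = 4, 5
W1, b1 = rng.normal(size=(H, I)), rng.normal(size=(H, 1))
W2, b2 = rng.normal(size=(1, H)), rng.normal(size=(1, 1))

def mlp_forward(x):
    """Forward phase: hyperbolic tangent in the hidden layer, linear output layer."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

x = rng.normal(size=(I, 1))   # one (scaled) input vector
y = mlp_forward(x)            # single predicted output value
```

Training would then adjust W1, b1, W2, b2 in the backward phase to minimize the MSE between outputs and targets.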
In this research, artificial neural networks were developed for different combinations of input variables while every MLP had only one output variable—total hardness after continuous cooling, HVtot. Input variables for the first configuration of artificial neural networks were the main alloying elements, austenitizing temperature, Ta, austenitizing time, ta, and cooling time to 500 °C, t500, while in the second configuration alloying elements were substituted by specific Jominy distance, Ed (as given in Table 4). In total 423 datasets of 24 steels were used to develop and test artificial neural networks. For this research computer software, MATLAB R2020b (MathWorks Inc., Natick, MA, USA) was used.
Robustness of ANNs can be ensured by preventing overlearning and overfitting of the ANNs and, most importantly, by checking the ANNs’ performance on unseen data (data that were not used for ANN development). Out of 423 datasets, about 10% of the data (41 datasets) were set aside as new, “unseen” data, which were used for an unbiased evaluation of a particular artificial neural network’s performance. The “unseen” data were randomly chosen, but in a way that represents the whole “population”. Since artificial neural networks learn by example, they should never be used to extrapolate data.
Overlearning is another caveat of the learning-by-example principle and must be addressed properly to improve generalization, i.e., the performance of the network on new, “unseen” data. This was taken into account by combining early stopping as a method for improving generalization with the “growth method” for determination of the number of neurons in the hidden layer. Early stopping means that the weights are adapted for the training dataset while the error function (in this case the mean square error, MSE) is calculated for the validation dataset. When the value of MSE on the validation dataset reaches a minimum (hopefully a global minimum), and then increases for a predefined number of epochs, training, i.e., learning of the ANN, is stopped.
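The early-stopping rule can be sketched as follows. The validation-error sequence and the patience value of 6 epochs are illustrative assumptions (6 is MATLAB's default validation-failure limit, but the paper does not state the value used):

```python
def early_stop_epoch(val_mse, patience=6):
    """Return the epoch at which training stops: when the validation MSE has not
    improved on its best value for `patience` consecutive epochs."""
    best, best_epoch = float("inf"), 0
    for epoch, mse in enumerate(val_mse):
        if mse < best:
            best, best_epoch = mse, epoch          # new (hopefully global) minimum
        elif epoch - best_epoch >= patience:
            return epoch                            # no improvement for `patience` epochs
    return len(val_mse) - 1                         # ran out of epochs without triggering

# Hypothetical validation MSE per epoch: minimum at epoch 4, then rising.
errs = [5.0, 3.0, 2.0, 1.5, 1.4, 1.45, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0]
stop = early_stop_epoch(errs)   # stops at epoch 10, six epochs after the minimum
```

In practice the weights saved at the validation minimum (epoch 4 here), not at the stopping epoch, would be kept.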
To prevent overfitting, the maximum number of neurons in the hidden layer, H, for which a particular design was trained was determined based on the number of available training examples, Ntrain, the number of input variables, I, and the number of output variables, O:
H ≤ (O·Ntrain − 1) / (I + O + 1).
Networks were trained using the Levenberg–Marquardt algorithm with early stopping, for different combinations of input variables as listed in Table 4, and hidden layer sizes from one neuron to H neurons (“growth method”). For each architecture, 10 networks were trained with random initial weights and data divisions.
Initial weights set the starting point for training of the ANN; by randomizing them 10 times, the odds of a well-chosen starting point are increased. Furthermore, the Levenberg–Marquardt algorithm usually requires division of the data into training, validation and testing datasets. Repeating this division randomly is important because a single random split may produce three groups that are not sufficiently representative of the whole population; with properly chosen data, it can be assumed that the entire population is represented. The common ratio for this division is 70/15/15, respectively, so it was also used in this research. According to this data division, if N is the total number of datasets used for development of ANNs, the number of training examples, Ntrain, is:
Ntrain = 0.7·N.
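A minimal sketch of such a random 70/15/15 division is given below; the rounding convention is an assumption (MATLAB's `dividerand` behaves similarly but not necessarily identically), and the 382 development datasets (423 minus the 41 unseen) are used purely as an example size:

```python
import numpy as np

def split_70_15_15(n, seed=0):
    """Randomly divide n example indices into 70% training, 15% validation, 15% test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(0.7 * n)
    n_val = int(0.15 * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# Example: 382 development datasets (423 total minus 41 "unseen") -> 267/57/58 split.
train_idx, val_idx, test_idx = split_70_15_15(382)
```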
Ntrain was constant for all networks, regardless of the number of input variables and architecture. The output variable was always only HVtot, which gives the number of outputs O = 1.
The number of unknowns, i.e., weights in a fully connected ANN, is:
Nw = (I + 1)·H + (H + 1)·O.
For the “worst” case scenario, when the ANN has the maximum number of inputs (9) and 24 neurons in the hidden layer, the number of unknown weights (network parameters) is 265. The number of degrees of freedom, Ndof, of a network is the difference between the number of training examples, Ntrain, and the number of weights, Nw:
Ndof = Ntrain − Nw.
The number of degrees of freedom, Ndof, of a network should always be greater than 0, and the above-mentioned “worst” case scenario yields Ndof = 2. These are extreme scenarios/ANN architectures, which were investigated to determine the most suitable hidden layer size, but also to check what is obtained with overfit networks. Since the number of weights should be much smaller than the number of training examples, i.e., Nw << Ntrain, or if we presume that the number of training examples, Ntrain, should be 4–5 times greater than the number of unknown weights, Nw, we obtain that for 9 inputs the maximum number of neurons in the hidden layer, H, is, when rounded, 5 or 6.
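The weight-count bookkeeping above can be checked directly with a few lines; the helper names are ours, and the Ntrain value passed to the rule-of-thumb helper is only an example:

```python
def n_weights(I, H, O):
    """Weights in a fully connected two-layer MLP: (I + 1)H + (H + 1)O."""
    return (I + 1) * H + (H + 1) * O

def max_hidden(n_train, I, O, factor=4):
    """Largest H satisfying n_train >= factor * n_weights (rule-of-thumb upper bound)."""
    H = 1
    while n_train >= factor * n_weights(I, H + 1, O):
        H += 1
    return H

# "Worst" case from the text: 9 inputs, 24 hidden neurons, 1 output -> 265 weights.
worst_case = n_weights(9, 24, 1)

# Rule of thumb with an example Ntrain of 267 and factor 4 (both illustrative):
h_cap = max_hidden(267, 9, 1, factor=4)
```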
Out of all architectures (hidden layer sizes H) and for both configurations of input variables, the best artificial neural networks were selected based on the values of the coefficient of correlation for the whole dataset and the test dataset, r and rtest respectively, and the values of the root mean square error, RMSE and RMSEtest, again for the whole dataset and the test dataset, respectively. The root mean square error, RMSE, is the square root of the MSE and a good measure of model accuracy, since it gives the prediction errors of different models in the same unit as the variable that is to be predicted. The goal of the network is to provide maximal r and rtest, and minimal RMSE and RMSEtest. If several networks had similar results, the one with the smaller hidden layer size was chosen. Results of the chosen architectures for both configurations are given in Table 5. In accordance with the previously explained criteria, it can be concluded that the selected networks are not overfit.

4. Results

To provide comparable measure with results published in relevant literature, performance of selected artificial neural networks was evaluated using the mean absolute percentage error, MAPE, calculated using Equation (6):
MAPE = (100%/n) · Σ(i = 1…n) |ei / ti|,
where ei are the prediction errors and ti are the network target values, i.e., the experimental values of HVtot. The MAPE value is usually interpreted as a forecasting (or prediction) goodness indicator; a value under 10% is usually said to indicate highly accurate forecasting. In some cases, it is taken as an indicator of the generalization capability of the model.
However, MAPE should not be used as the sole indicator of the goodness of the model. To determine the generalization capability of the model, in this case the ANNs, it is useful to also calculate the statistics/indicators that were used for selection of the ANN on unseen data, i.e., data that were not used in the development and selection of the ANN (training, validation, test data). To obtain a completely unbiased evaluation of artificial neural network performance, for each chosen network, besides MAPEunseen, the coefficient of correlation runseen between targets and outputs (i.e., experimental and predicted values) and the root mean square error RMSEmean,unseen were calculated for the same set of new, “unseen” data. Results are given in Table 6.
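The three indicators can be computed as sketched below on placeholder arrays; the function name is ours, and ei = ti − yi as in Equation (6):

```python
import numpy as np

def evaluate(t, y):
    """Coefficient of correlation r, RMSE and MAPE (%) between experimental
    target values t and predicted values y."""
    e = t - y
    rmse = float(np.sqrt(np.mean(e ** 2)))
    mape = float(np.mean(np.abs(e / t)) * 100)   # mean absolute percentage error, %
    r = float(np.corrcoef(t, y)[0, 1])
    return r, rmse, mape

# Illustrative hardness values (HV), not data from the paper:
t = np.array([100.0, 200.0, 300.0, 400.0])
y = np.array([110.0, 190.0, 330.0, 380.0])
r, rmse, mape = evaluate(t, y)   # mape -> 7.5 %
```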
It can be seen from Table 5 and Table 6 that runseen and RMSEmean,unseen are comparable or even better than the values obtained for the dataset used for development and selection of ANN designs. MAPE values were also calculated for development dataset (13% for Configuration No. 1 and 15.7% for Configuration No. 2) and when compared to MAPEunseen (Table 6) it can be seen that those values do not differ significantly. Based on all three indicators it can be concluded that the ANN designs are robust, and that generalization capability of the ANNs, as well as the expected forecasting/prediction are promising.
Scatter diagrams in Figure 3 and Figure 4 provide information on the relation between experimental and predicted values of total hardness after continuous cooling, HVtot, obtained by the selected artificial neural networks for the development and “unseen” data. Results for the development data are included not as an absolute performance indicator, but rather to confirm the consistency of the performance and the results obtained for the development and “unseen” datasets.
Data points shown by scatter diagram in Figure 3a and Figure 4a are obtained using artificial neural network that was developed with main alloying elements, austenitizing temperature, Ta, austenitizing time, ta, and cooling time to 500 °C, t500, as input variables (configuration No. 1). Artificial neural network used to obtain data points in the scatter diagram in Figure 3b and Figure 4b (configuration No. 2) was developed using specific Jominy distance, Ed, as substitution for main alloying elements as input variables, while other inputs were the same as in configuration No. 1.
From the diagrams in both Figure 3 and Figure 4, no significant deviation of data to either side of the ideal regression line (r = 1) can be observed. The scatter of data points is similar in both diagrams, with configuration No. 2 (Figure 3b and Figure 4b) showing a somewhat tighter distribution around the ideal line and less pronounced outlying values.
From both diagrams in Figure 3, a grouping of data points related to certain constant values of predicted total hardness can be observed. Two major values/levels, in addition to a few less pronounced ones, are observable in Figure 3a, which is related to configuration No. 1, where the main alloying elements were used as inputs. In Figure 3b, related to configuration No. 2 developed using the specific Jominy distance, a single such value/level is notable. These “levels”, prevalent at lower values of hardness, are related to the ferrite-pearlite microstructure, which is achieved with low to very low cooling rates, i.e., very high cooling times to 500 °C [8,11,32]. After a certain, relatively high value of cooling time, its further increase no longer results in notable changes, i.e., differences in the hardness of the resulting microstructure, as it becomes predominantly of the ferrite type, an effect which the developed artificial neural networks correctly capture.
To further evaluate predictive accuracy of artificial neural networks for prediction of total hardness after continuous cooling, HVtot, deviations of predicted values from their experimentally obtained counterparts were used as relevant indicators. Deviations up to ±5, ±(5…10), ±(10…15) and ±(15…20)% were used for evaluation of predictions of HVtot and shown in Figure 5 for “unseen” dataset.
Both configurations of artificial neural networks show similar performance. The ANN that uses the main alloying elements as input variables (configuration No. 1) is somewhat more successful than the network using the specific Jominy distance (configuration No. 2) as an input variable. However, for both configurations the same share (34%) of predicted HVtot values deviates up to ±5% from the experiment-based counterparts. Configuration No. 1 is more successful in the deviation range ±(5…10)%, while configuration No. 2 is more successful in the deviation range ±(10…15)%. When comparing deviations up to ±20%, about 87% and 73% of the predicted data fall within that range when predicted with configuration No. 1 and configuration No. 2, respectively.
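The deviation-band tally shown in Figure 5 can be reproduced with a few lines of this kind; the band edges follow the text, while the sample arrays are purely illustrative:

```python
import numpy as np

def deviation_bands(t, y, edges=(5, 10, 15, 20)):
    """Share (%) of predictions whose relative deviation |y - t| / t (in %)
    falls within each band ±(lo…hi)%."""
    dev = np.abs((y - t) / t) * 100
    shares, lo = {}, 0
    for hi in edges:
        shares[f"±({lo}…{hi})%"] = float(np.mean((dev > lo) & (dev <= hi)) * 100)
        lo = hi
    return shares

# Illustrative experimental vs. predicted hardness values (HV):
t = np.array([100.0, 100.0, 100.0, 100.0])
y = np.array([103.0, 108.0, 113.0, 118.0])
bands = deviation_bands(t, y)   # 25% of the points fall in each band
```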

5. Discussion

An analysis of previous research whose results are available in the literature shows that most ANNs for hardness prediction of steels after continuous cooling are based on chemical composition, heat treatment parameters and cooling time as input variables. Due to the large number of input variables, and the fact that in certain cases the detailed chemical composition of a steel is not available, in this paper it is proposed for the first time to replace the chemical composition with the specific Jominy distance as a new input variable, which corresponds to the Jominy distance at which 50% of the microstructure is martensite.
In addition to the specific Jominy distance, the other input variables used to predict total hardness after continuous cooling of hypoeutectoid, low-alloy steels for case hardening based on carburizing and steels for quenching and tempering are the austenitizing temperature, the austenitizing time, and the cooling time to 500 °C. The austenitizing temperature and austenitizing time influence the prior austenite grain size and the solubility of carbon and other alloying elements in austenite, and thus the microstructure and mechanical properties of steel after continuous cooling. With an increase in the heating temperature or holding time in the austenite range, the austenite grains begin to grow intensively, which leads to a coarse-grained structure of steel (ferrite-pearlite, bainite, martensite), characterized by lower mechanical properties.
The main driving force of phase transformations is the change of thermodynamic instability caused by temperature change. With undercooling of steel below the critical temperature, the thermodynamic stability of a primary microstructure is disrupted, resulting in austenite decomposition into ferrite, pearlite, bainite and martensite. The volume fraction and hardness of these microconstituents and thus the total hardness of steel after continuous cooling depend mainly on cooling rate, which could be adequately replaced by the cooling time.
Comparing the results of total hardness prediction after continuous cooling, it can be seen that the ANN using main alloying elements as input variables (Table 4, configuration No. 1) is somewhat more successful than the ANN using the specific Jominy distance as input variable (configuration No. 2). It is also important to notice (Figure 5) that in the deviation range up to ±5% the ANN using the specific Jominy distance has equal prediction abilities.
Therefore, the research results indicate that the prediction of total hardness of steel can be successfully performed only based on four input variables: the austenitizing temperature, the austenitizing time, the cooling time to 500 °C and the specific Jominy distance.
From the aspect of materials and input variables, further research could be directed toward dividing the investigated steels into two groups: 1. steels for case hardening based on carburizing, and 2. steels for quenching and tempering. Also, with the aim of achieving even better results, further research could be directed toward involving the maximal achievable hardness of steel and/or ranges of average values of individual chemical elements pertaining to typical compositions of steels in the prediction model. The maximal achievable hardness is available from the Jominy curve of the investigated steels.

6. Conclusions

In this paper, artificial neural network-based prediction of the total hardness of hypoeutectoid, low-alloy steels using the specific Jominy distance, Ed, has been proposed. The main goal of this research was to check whether the chemical composition (C, Si, Mn, Cr, Mo, Ni, in wt. %, as input variables) can be substituted by the specific Jominy distance, Ed, providing an alternative approach to prediction when the detailed chemical composition is unknown, as well as simpler models that predict total hardness with sufficient accuracy.
The following conclusions can be reached.
  • Both configurations of artificial neural networks show similar performance. ANN using the specific Jominy distance as input variable (runseen = 0.873, RMSEunseen = 67, MAPE = 14.8%) is almost as successful as ANN using main alloying elements (runseen = 0.940, RMSEunseen = 46, MAPE = 10.7%).
  • The prediction results indicate that the ANN designs are robust, and that the generalization capability of the ANNs, as well as their expected prediction performance, is promising.
  • The prediction of the total hardness of steel can be successfully performed based on only four input variables: the austenitizing temperature, the austenitizing time, the cooling time to 500 °C, and the specific Jominy distance.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/met11050714/s1, Table S1: Data used for development and testing of artificial neural networks.

Author Contributions

Conceptualization, S.S.H. and D.I.; methodology, S.S.H., T.M. and D.I.; software, T.M.; validation, S.S.H., T.M., D.I. and R.B.; formal analysis, T.M. and R.B.; investigation, S.S.H. and D.I.; writing, S.S.H., T.M., D.I. and R.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by Croatian Science Foundation under the project IP-2020-02-5764 and by the University of Rijeka under the project number uniritehnic-18-116.

Data Availability Statement

Datasets used for developing and testing of artificial neural networks are available in the supplementary material (Table S1).

Acknowledgments

The authors wish to gratefully acknowledge support by Croatian Science Foundation under the project IP-2020-02-5764 and by the University of Rijeka under the project number uniritehnic-18-116.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Specific Jominy distance.
Figure 2. Fully connected multilayer perceptron with one hidden layer.
Figure 3. Scatter diagram of total hardness after continuous cooling HVtot values obtained by selected artificial neural networks of (a) configuration No. 1 and (b) configuration No. 2, on development data.
Figure 4. Scatter diagram of total hardness after continuous cooling HVtot values obtained by selected artificial neural networks of (a) configuration No. 1 and (b) configuration No. 2, on “unseen” data.
Figure 5. Percentage of HVtot values predicted by artificial neural networks that deviate up to ±5, ±(5…10), ±(10…15) and ±(15…20)% from experiment-based values.
Table 1. Chemical composition of studied hypoeutectoid, low-alloy steels (balance Fe).

Data No.  Designation (DIN)  Chemical composition, wt. %
                             C     Si    Mn    P      S      Al      Cr    Cu     Mo    Ni    V
1.        Ck15               0.15  0.22  0.41  0.021  0.024  <0.005  0.06  0.15   -     0.06  -
2.        Ck15 1             0.30  0.29  0.39  0.012  0.026  0.003   0.12  0.215  -     -     -
3.        16MnCr5            0.16  0.22  1.12  0.030  0.008  0.015   0.99  -      0.02  0.12  0.01
4.        15CrNi6            0.13  0.31  0.51  0.023  0.009  0.010   1.50  -      0.06  1.55  <0.01
5.        20MoCr4 1          0.28  0.30  0.66  0.018  0.011  0.049   0.56  0.18   0.44  0.15  -
6.        20MoCr4 1          0.57  0.30  0.66  0.018  0.011  0.049   0.56  0.18   0.44  0.15  -
7.        25MoCr4 1          0.31  0.20  0.67  0.017  0.022  0.034   0.50  -      0.45  0.11  -
8.        20NiMoCr6 1        0.28  0.15  0.62  0.015  0.020  0.015   0.47  -      0.48  1.58  -
9.        Ck45               0.44  0.22  0.66  0.022  0.029  -       0.15  -      -     -     0.02
10.       37MnSi5            0.38  1.05  1.14  0.035  0.019  -       0.23  -      -     -     0.02
11.       42MnV7             0.43  0.28  1.67  0.021  0.008  -       0.32  0.06   0.03  0.11  0.10
12.       34Cr4              0.35  0.23  0.65  0.026  0.013  -       1.11  0.18   0.05  0.23  <0.01
13.       34Cr4              0.36  0.29  0.69  0.011  0.014  -       1.09  0.12   0.07  0.08  0.01
14.       41Cr4              0.44  0.22  0.80  0.030  0.023  -       1.04  0.17   0.04  0.26  <0.01
15.       41Cr4              0.41  0.25  0.71  0.031  0.024  -       1.06  0.17   0.02  0.22  <0.01
16.       36Cr6              0.36  0.25  0.49  0.021  0.020  -       1.54  0.16   0.03  0.21  <0.01
17.       25CrMo4            0.22  0.25  0.64  0.010  0.011  -       0.97  0.16   0.23  0.33  <0.01
18.       34CrMo4            0.30  0.22  0.64  0.011  0.012  -       1.01  0.19   0.24  0.11  <0.01
19.       42CrMo4            0.38  0.23  0.64  0.019  0.013  -       0.99  0.17   0.16  0.08  <0.01
20.       50CrMo4            0.50  0.32  0.80  0.017  0.022  -       1.04  0.17   0.24  0.11  <0.01
21.       50CrMo4            0.46  0.22  0.50  0.015  0.014  -       1.00  0.26   0.21  0.22  <0.01
22.       27MnCrV4           0.24  0.21  1.06  0.014  0.020  -       0.79  0.17   0.02  0.18  <0.01
23.       50CrV4             0.55  0.22  0.98  0.017  0.013  -       1.02  0.07   -     0.01  0.11
24.       50CrV4             0.47  0.35  0.82  0.035  0.015  -       1.20  0.14   -     0.04  0.11

1 Higher carbon content relative to standard carbon content in steel.
Table 2. The input variables.

Data No.  Variable                     Data No.  Variable
1.        Carbon (C, wt. %)           6.        Nickel (Ni, wt. %)
2.        Silicon (Si, wt. %)         7.        Austenitizing temperature (Ta, °C)
3.        Manganese (Mn, wt. %)       8.        Austenitizing time (ta, min)
4.        Chromium (Cr, wt. %)        9.        Cooling time to 500 °C (t500, s)
5.        Molybdenum (Mo, wt. %)      10.       Specific Jominy distance (Ed, mm)
Table 3. Sample dataset for steel 42CrMo4.

Data No.  C      Si     Mn     Cr     Mo     Ni     Ta    ta    t500    Ed   HVtot
          wt. %  wt. %  wt. %  wt. %  wt. %  wt. %  °C    min   s       mm
1.        0.38   0.23   0.64   0.99   0.16   0.08   1050  10    3.0     16   735
2.        0.38   0.23   0.64   0.99   0.16   0.08   1050  10    8.8     16   627
3.        0.38   0.23   0.64   0.99   0.16   0.08   1050  10    51.0    16   279
4.        0.38   0.23   0.64   0.99   0.16   0.08   1050  10    98.0    16   267
5.        0.38   0.23   0.64   0.99   0.16   0.08   1050  10    296.4   16   267
6.        0.38   0.23   0.64   0.99   0.16   0.08   1050  10    611.7   16   263
7.        0.38   0.23   0.64   0.99   0.16   0.08   1050  10    2010.8  16   235
8.        0.38   0.23   0.64   0.99   0.16   0.08   1050  10    6961.4  16   245
Table 4. Variables used for development of artificial neural networks.

          Variable                                               Configuration No. 1  Configuration No. 2
Input     Carbon (C, wt. %)                                      +
          Silicon (Si, wt. %)                                    +
          Manganese (Mn, wt. %)                                  +
          Chromium (Cr, wt. %)                                   +
          Molybdenum (Mo, wt. %)                                 +
          Nickel (Ni, wt. %)                                     +
          Austenitizing temperature (Ta, °C)                     +                    +
          Austenitizing time (ta, min)                           +                    +
          Cooling time to 500 °C (t500, s)                       +                    +
          Specific Jominy distance (Ed, mm)                                           +
Output    Total hardness after continuous cooling (HVtot, HV)    +                    +
Table 5. Artificial neural network performance on learning dataset.

Configuration No.  Hidden Layer Size H  Training No.  r      RMSE  rtest  RMSEtest
1                  3                    7             0.940  65    0.911  88
2                  6                    3             0.918  75    0.903  93
Table 6. Performance of selected artificial neural networks on “unseen” data.

Configuration No.  runseen  RMSEunseen  MAPEunseen, %
1                  0.940    46          10.7
2                  0.873    67          14.8
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.