Abstract
This paper introduces an innovative approach that enables the automated and precise prediction of a steel’s chemical composition from the desired Jominy curve. The microstructure, and in particular the presence of martensite, is decisive for the hardness of the steel, so the study considered the occurrence of this phase at particular distances from the quenched end of the Jominy specimen. Steels for quenching and tempering and for case hardening were investigated. Using a representative dataset of hardness values measured along the Jominy specimen, together with the microstructure and chemical composition of the steels, a complex regression model was built with supervised artificial neural networks. A balance between cost and the required hardenability can be achieved by optimizing the chemical composition of the steel. This model for designing steel with the required hardenability can be of great benefit to the mechanical engineering and manufacturing industry. The model was verified experimentally.
1. Introduction
The selection of steel for specific machine parts, tools or constructions is preceded by the definition of the required properties. The required properties of steels, such as hardenability, are achieved by means of a specific chemical composition and manufacturing process. During the production of machine parts and tools, it would therefore be very beneficial if the chemical composition that provides the required hardenability were known in advance.
Hardenability is defined as the ability of ferrous alloys to acquire hardness after austenitization and quenching. This includes the ability to reach the highest possible hardness and hardness distribution within a cross-section []. The maximum hardness of heat-treated steel primarily depends on the carbon content. Alloying elements have little effect on the maximum hardness, but they significantly affect the depth to which this maximum hardness can be developed. Thus, one of the first decisions to be made is what carbon content is necessary to obtain the desired hardness. The next step is to determine what alloy content will give the proper hardening response in the section size involved [].
In 1938, Jominy and Boegehold developed the end-quench hardenability test, which characterizes the hardenability of a steel from a single specimen []. The Jominy test is still used worldwide and is part of many standards such as EN ISO 642:1999, ASTM A255 and JIS G 0561 [,,]. The result of the Jominy test is the Jominy curve, which shows the measured hardness values at different distances from the quenched end. As mentioned before, the chemical composition of steel has a significant influence on hardenability, and thus numerous attempts have been made to characterize a steel’s hardenability from its chemical composition. An accurate model for calculating hardenability (the Jominy curve) at an early stage of steel production would make it possible to control the hardenability of the final product. Grossmann characterized hardenability by defining the ideal critical diameter as the largest diameter of a cylindrical specimen which transforms into at least 50% martensite when quenched with an infinitely large cooling rate at the surface []. In his later work, Grossmann proposed multiplying coefficients for alloying elements for the calculation of the critical diameter []. Numerous studies followed, trying to characterize a steel’s hardenability from its chemical composition [,,,,,,,,,].
Nevertheless, the relationship between the chemical composition of steel and the resulting values obtained after the Jominy end-quench test, and vice versa, cannot be defined precisely enough by any mathematical function. More complex regression analysis is necessary instead. Developments in computers and software have positioned artificial neural networks at the forefront in technical science in general, and materials science in particular [,,,,,,].
Deep learning is a powerful tool for finding patterns in multi-dimensional data. Deep learning, as a subset of machine learning, uses algorithms in a way where a computer can learn from an empirical dataset by modeling nonlinear relationships between the material properties and influencing factors []. Artificial neural networks (ANNs) are widely used in modeling steel and metal alloy issues due to their efficiency in handling regression tasks. ANNs are characterized by their ability to learn from labeled datasets and are, therefore, well suited for supervised learning applications. The creation of a representative dataset is crucial for the development of an effective model based on artificial neural networks [,,,].
Designing the chemical composition of a steel with the required properties is a crucial task from a manufacturing point of view [,,,,]. Knowing the required hardenability for machine parts or tools enables the design of a steel with an optimal chemical composition. Such a steel will have adequate hardenability for the specific application. Quenching and tempering of steels with insufficient hardenability does not provide the appropriate hardness in deeper layers and can cause functional problems. On the other hand, excessive hardenability implies a surplus of alloying elements and thus increases the cost.
The content of alloying elements should not be higher than necessary to ensure adequate hardenability.
In this paper, an innovative approach is introduced that enables the automated and precise prediction of a steel’s chemical composition based on the Jominy curve, taking into account the microstructure at different distances from the quenched end of the Jominy specimen. In other words, the research question addressed in this article is as follows: what chemical composition must a steel have so that it exhibits the required Jominy curve? To determine the relationship between the Jominy curve and the chemical composition, artificial neural networks were used. Steels for quenching and tempering and for case hardening were investigated. It is known that during the Jominy test, areas closer to the quenched end are cooled faster. The result is that different microstructures are obtained at different distances from the quenched end of the Jominy specimen. Steels with different chemical compositions can achieve the same microstructure (e.g., 99.9% martensite), but due to their different chemical composition (primarily carbon content), the hardness values will differ [,]. This implies that hardness values are the result of the microstructure as well as the chemical composition. To minimize possible errors in the input data regarding the microstructure, only the presence of martensite was considered, with the limit set at 50% martensite in the microstructure. Hardness depends primarily on carbon content, and hardness values are known for different carbon contents and different fractions of martensite in the microstructure [,]. Some authors have successfully used the following relation to calculate the hardness corresponding to 50% martensite in the microstructure from the maximum achievable hardness [,]:
HRC50%M = 0.73 ∙ HRCmax
This research involved the supervised learning of artificial neural networks, and the dataset was structured in such a way that clear relations were established between the predictors (hardnesses measured on the Rockwell hardness scale C (HRC) and the microstructure present at specific distances from the quenched end of the Jominy specimen) and the responses (the contents of seven alloying elements). Through data collection, modeling and the optimization of regression models, the optimal artificial neural network model for designing the chemical composition of steel was established.
2. Materials and Methods
2.1. Analysis of Dataset
In this research, steels for quenching and tempering and for case hardening were investigated, from the standards EN 10083-2, EN 10083-3, EN 10084 and similar [,,]. The dataset used for neural network training was obtained from our own research performed at the former Stalowa Wola Steelworks (Stalowa Wola, Poland) (about 80% of the data), supplemented with data from the standards, steel producers’ catalogs, trade literature and the Max Planck atlas for the heat treatment of steel. The dataset consisted of Jominy curves, presented as consecutive hardness values at 13 distances from the quenched end of the Jominy specimen, and the wt.% content of the alloying elements C, Si, Mn, Cr, Ni, Mo and Cu. It was assumed that the heat treatment of the steels in the collected data was performed under standard conditions conforming to the ASTM standard []. The grain size according to the ASTM scale is 7. The dataset contains hardness values (Rockwell hardness scale C, HRC) at distances of 1.5, 3, 5, 7, 9, 11, 13, 15, 20, 25, 30, 40 and 50 mm from the quenched end of the Jominy specimen (Table 1).
Table 1.
Ranges of predictor parameters (HRC) used to model chemical composition of steels.
The dataset also contains the mass concentrations (wt.%) of the seven alloying elements, namely C, Mn, Si, Cr, Ni, Mo and Cu. Complete homogeneity of the steel is not realistic to expect, and even a small deviation in homogeneity can significantly affect the microstructure distribution along the Jominy specimen. Therefore, only the presence of martensite was considered, with the limit set at 50% martensite in the microstructure. The ratio of the hardness of steel with 50% martensite to that with 99% martensite in the microstructure, for different carbon contents, is shown in Figure 1 [].
Figure 1.
Ratio of the hardness of steel with 50% martensite to that with 99% martensite in the microstructure for different carbon contents.
The hardness value corresponding to 50% martensite (HRC50%M) is calculated from the maximum hardness (HRCmax) using the following relation [,]:
HRC50%M = k ∙ HRCmax
where k depends on the carbon content (wt.%). For distances on the Jominy specimen with hardness values greater than k ∙ HRCmax, the value 1 is assigned (more than 50% martensite is present in the microstructure). For hardness values lower than k ∙ HRCmax, the value 0 is assigned (less than 50% martensite is present in the microstructure). The full dataset consists of 470 steels, so the described procedure was performed on 470 steels with different Jominy curves and different chemical compositions. An example of the predictors for steel 42CrMo4 is shown in Table 2 and Figure 2.
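As an illustration, a minimal MATLAB sketch of this thresholding step is given below. The variable names and the example hardness values are assumptions made for illustration only, and k is fixed here at the value 0.73 quoted above, whereas in the study it varies with the carbon content.

```matlab
% Minimal sketch of the martensite-indicator step (assumed variable names and example values).
jominyHRC = [57 56 54 51 47 43 40 38 35 33 32 31 30];  % hardness at 1.5 ... 50 mm (example only)
HRCmax    = max(jominyHRC);                   % maximum hardness of the Jominy curve
k         = 0.73;                             % assumed constant here; in the study k depends on wt.% C
HRC50M    = k * HRCmax;                       % hardness corresponding to 50% martensite
martensiteFlag = double(jominyHRC > HRC50M);  % 1 = more than 50% martensite, 0 = less
predictors = [jominyHRC, martensiteFlag];     % 26 predictors describing one steel
```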
Table 2.
Predictors for one observation (steel 42CrMo4).
Figure 2.
Predictors of steel 42CrMo4 (hardnesses and microstructures).
The response for the same steel is shown in Table 3.
Table 3.
Responses for steel 42CrMo4 (chemical composition).
The dataset is divided into two subsets: a training set, used to determine the model parameters, and a separate test set, used as an independent dataset for evaluating the model’s performance. The data partition is carried out in MATLAB R2023b software (MathWorks®, Natick, MA, USA) using the cvpartition function. A typical application of this function is holdout validation, in which it generates a random, non-stratified partition of the dataset into a training set and a test set. The proportion of observations assigned to the test set is set to 5%, while the remaining data form the training set [].
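The following MATLAB sketch illustrates such a holdout split; the matrices X (predictors) and Y (responses) and the fixed random seed are assumptions for illustration.

```matlab
% Minimal sketch of the 95%/5% holdout split with cvpartition (assumed variable names X and Y).
rng(1);                               % fixed seed so the partition is reproducible
n = size(X, 1);                       % number of observations (470 steels)
c = cvpartition(n, 'HoldOut', 0.05);  % random, non-stratified holdout partition
Xtrain = X(training(c), :);  Ytrain = Y(training(c), :);
Xtest  = X(test(c), :);      Ytest  = Y(test(c), :);
```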
The training dataset must be a representative sample of the data. The Kolmogorov–Smirnov (KS) test is used to evaluate the goodness of fit between the training and test datasets [,]. It checks whether the distributions of the training and test sets differ significantly. The results of the Kolmogorov–Smirnov test are presented in Table 4 and Table 5.
Table 4.
Kolmogorov–Smirnov test results of goodness of fit of the training and test dataset.
Table 5.
Kolmogorov–Smirnov test results of goodness of fit of the training and test dataset.
The Kolmogorov–Smirnov (KS) tests were carried out for each feature at a 5% significance level to assess the similarity of feature distributions between the training and test datasets. The KS test values were calculated for the features of hardness (HRC) and microstructure at specific distances from the quenched end of Jominy specimen. The obtained KS test values are between 0.09 and 0.167 for hardness and between 0 and 0.161 for microstructure. In addition, the p-values were calculated for each test, ranging from 0.57 to 0.99 (hardness) and 0.96 to 1 (microstructure), indicating no substantial evidence to reject the null hypothesis of similarity between the distributions. These results suggest that the model is likely to generalize well with the unseen test dataset (Figure 3).
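Such per-feature two-sample KS tests can be reproduced in MATLAB as sketched below; looping over the columns of Xtrain and Xtest is an assumption about how the individual tests were organized.

```matlab
% Minimal sketch: two-sample Kolmogorov-Smirnov test for each feature at the 5% level.
nFeatures = size(Xtrain, 2);          % 26 features: 13 hardnesses + 13 martensite flags
ksStat = zeros(1, nFeatures);
pVal   = zeros(1, nFeatures);
for j = 1:nFeatures
    [~, pVal(j), ksStat(j)] = kstest2(Xtrain(:, j), Xtest(:, j), 'Alpha', 0.05);
end
% p-values above 0.05 give no evidence that the training and test
% distributions of a feature differ.
```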
Figure 3.
Training and test dataset’s split distribution.
2.2. Experimental Setup
The Machine Learning and Deep Learning Toolbox from MATLAB R2023b software (MathWorks®) was utilized for regression tasks. Neural network regression models are trained using the Regression Learner App, with 10-fold cross-validation implemented during training to prevent overfitting [,]. The model development process, including both the training and testing phases, is conducted separately for each of the seven alloying elements. Five different architectures of artificial neural networks are employed for training: ‘trilayered’, ‘bilayered’, ‘narrow’, ‘medium’ and ‘wide’ networks, in addition to an optimized neural network. The specific architectures of these neural networks are as follows:
- Trilayered [26-10-10-10-1],
- Bilayered [26-10-10-1],
- Narrow [26-10-1],
- Medium [26-25-1],
- Wide [26-100-1].
All neural networks have an input layer with 26 nodes and an output layer with one node; only the hidden layers differ. The narrow, medium and wide neural networks have one hidden layer with 10, 25 and 100 nodes, respectively. The bilayered neural network has two hidden layers with 10 nodes each, while the trilayered neural network has three hidden layers with 10 nodes each. The Rectified Linear Unit (ReLU) activation function was used for all five fixed networks. The architecture and activation function of the optimized neural networks vary.
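A minimal MATLAB sketch of how one of these fixed architectures can be trained and cross-validated from the command line is given below; it uses the fitrnet function rather than the interactive Regression Learner App, and the variable names are assumptions, so it should be read as an equivalent illustration rather than the exact study workflow.

```matlab
% Minimal sketch: train the 'narrow' architecture for a single alloying element.
% Xtrain: 26 predictors per steel; ytrain: wt.% of one alloying element.
layerSizes = struct('narrow', 10, 'medium', 25, 'wide', 100, ...
                    'bilayered', [10 10], 'trilayered', [10 10 10]);
mdl = fitrnet(Xtrain, ytrain, ...
              'LayerSizes', layerSizes.narrow, ...  % one hidden layer with 10 nodes
              'Activations', 'relu', ...            % ReLU in the hidden layers
              'Standardize', true);
% 10-fold cross-validation of the same specification to check for overfitting.
cvMdl  = crossval(mdl, 'KFold', 10);
cvRMSE = sqrt(kfoldLoss(cvMdl));                    % kfoldLoss returns the mean squared error
```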
The model with the smallest test root mean square error (RMSE) is evaluated as the best model, because a low RMSE indicates that the selected model is likely to generalize well to new data. In addition to the RMSE, the mean square error (MSE), mean absolute error (MAE) and coefficient of determination (R2) metrics are used to evaluate the model during the test performance analysis [].
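These test metrics can be computed directly from the model predictions, as in the sketch below (variable names assumed; mdl, Xtest and ytest follow from the previous sketches).

```matlab
% Minimal sketch: evaluate a trained model on the held-out test set.
yPred = predict(mdl, Xtest);          % predicted wt.% of the alloying element
res   = ytest - yPred;                % residuals
MSE   = mean(res.^2);
RMSE  = sqrt(MSE);
MAE   = mean(abs(res));
R2    = 1 - sum(res.^2) / sum((ytest - mean(ytest)).^2);  % can become negative for a poor fit
```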
3. Results
In order to select the best-performing model for predicting the chemical composition of steel, this section offers a thorough overview of training, model evaluation and selection, and performance analysis. The training was conducted separately for each chemical element, because the individual influence of each element is crucial and in most cases is sufficient to obtain the required chemical composition of steel. Thus, all the neural networks have an input layer with 26 nodes (13 hardnesses at distances of 1.5, 3, 5, 7, 9, 11, 13, 15, 20, 25, 30, 40 and 50 mm from the quenched end of the Jominy specimen and 13 values indicating whether more or less than 50% martensite is present) and an output layer with one node (the content of the chemical element).
3.1. The Effect of Individual Alloying Elements on the Hardenability of Steels
3.1.1. The Effect of the Carbon on the Hardenability of Steel
The artificial neural network models are listed in Table 6, ranked from the lowest RMSE.
Table 6.
Results after training for carbon with six different architectures of ANNs.
The narrow neural network model (one hidden layer with 10 nodes) has the best results. For that model, the predicted response vs. the experimental data for each observation from the training dataset is shown in Figure 4. For the same model, the predicted response vs. the experimental unseen data (the test dataset) is shown in Figure 5. Figure 5 also shows the relationship between the residuals (the difference between the predicted response and the experimental data) and the true data. The best neural network model shows excellent results, with a very low RMSE as well as an extremely high R2 (close to 1).
Figure 4.
Performance evaluation of narrow neural network on training dataset (carbon).
Figure 5.
Performance evaluation of narrow neural network on test dataset (carbon).
3.1.2. The Effect of the Manganese on the Hardenability of Steel
The artificial neural network models for manganese are placed in Table 7 according to the lowest RMSE as well.
Table 7.
Results after training for manganese with six different architectures of ANNs.
As with carbon, the narrow neural network model has the best results. The predicted response vs. the experimental data for each observation from the training dataset is shown in Figure 6. For the same model, the predicted response vs. the experimental unseen data (the test dataset) is shown in Figure 7. Figure 7 also shows the relationship between the residuals and the true data. The best neural network model shows good results, with a relatively low RMSE and a satisfactory R2.
Figure 6.
Performance evaluation of narrow neural network on training dataset (manganese).
Figure 7.
Performance evaluation of narrow neural network on test dataset (manganese).
3.1.3. The Effect of the Silicon on the Hardenability of Steel
The artificial neural network models for silicon are placed in Table 8 according to the lowest RMSE as well.
Table 8.
Results after training for silicon with six different architectures of ANNs.
According to the given criterion (lowest RMSE), the optimizable neural network is placed in the first row (Table 8). However, the predicted values of this model are concentrated around the mean value of the training dataset. For this reason, the narrow neural network model is chosen as the best. For this model, the predicted response vs. the experimental data for each observation from the training dataset is shown in Figure 8. For the same model, the predicted response vs. the experimental unseen data is shown in Figure 9. Figure 9 also shows the relationship between the residuals and the true data.
Figure 8.
Performance evaluation of narrow neural network on training dataset (silicon).
Figure 9.
Performance evaluation of narrow neural network on test dataset (silicon).
The narrow neural network has a very low RMSE, so one could conclude that the model is very good. At the same time, the coefficient of determination R2 is negative. Since R2 = 1 − SSres/SStot, it becomes negative whenever the residual sum of squares exceeds the total sum of squares of the response around its mean; a negative value therefore indicates that the regression performed worse than a model that simply predicts the mean and explains none of the variability of the response data [,]. Because the silicon contents in the dataset span only a narrow range (at most 0.4% for the investigated steel grades), even small residuals can drive R2 below zero. Nevertheless, the residuals in Figure 9 are very small, which means that the predicted values are close to the experimental data and that the predicted silicon contents lie within the limits of the given steel standard.
3.1.4. The Effect of the Chromium on the Hardenability of Steel
The artificial neural network models for chromium are placed in Table 9 according to the lowest RMSE as well.
Table 9.
Results after training for chromium with six different architectures of ANNs.
According to the given criterion, the bilayered neural network model is chosen as the best (the neural network with two hidden layers with 10 nodes each). For this model, the predicted response vs. the experimental data for each observation from the training dataset is shown in Figure 10. For the same model, the predicted response vs. the experimental unseen data is shown in Figure 11. In Figure 11, the relation between residuals and true data is shown too.
Figure 10.
Performance evaluation of bilayered neural network on training dataset (chromium).
Figure 11.
Performance evaluation of bilayered neural network on test dataset (chromium).
The bilayered neural network has a relatively low RMSE and a very good coefficient of determination R2 (close to 0.9).
3.1.5. The Effect of the Nickel on the Hardenability of Steel
The artificial neural network models for nickel are placed in Table 10 according to the lowest RMSE.
Table 10.
Results after training for nickel with six different architectures of ANNs.
According to the given criterion, the trilayered neural network model is chosen as the best (neural network with three hidden layers with 10 nodes each). For this model, the predicted response vs. the experimental data for each observation from the training dataset is shown in Figure 12. For the same model, the predicted response vs. the experimental unseen data is shown in Figure 13. In Figure 13, the relationship between residuals and true data is shown too.
Figure 12.
Performance evaluation of trilayered neural network on training dataset (nickel).
Figure 13.
Performance evaluation of trilayered neural network on test dataset (nickel).
The trilayered neural network has a relatively low RMSE and a very good coefficient of determination R2 (close to 0.9). For most of the dataset, the residuals are lower than 0.2, with one relatively high exception (0.6).
3.1.6. The Effect of the Molybdenum on the Hardenability of Steel
The artificial neural network models for molybdenum are placed in Table 11 according to the lowest RMSE as well.
Table 11.
Results after training for molybdenum with six different architectures of ANNs.
According to the given criterion, the optimizable neural network model is chosen as the best (a neural network with two hidden layers, the first with 295 nodes and the second with 5 nodes). For this model, the predicted response vs. the experimental data for each observation from the training dataset is shown in Figure 14. For the same model, the predicted response vs. the experimental unseen data is shown in Figure 15. Figure 15 also shows the relationship between the residuals and the true data.
Figure 14.
Performance evaluation of optimized neural network on training dataset (molybdenum).
Figure 15.
Performance evaluation of optimized neural network on test dataset (molybdenum).
The optimized neural network has a very low RMSE and a weak coefficient of determination R2. For most of the test dataset, the residuals are lower than 0.1.
3.1.7. The Effect of the Copper on the Hardenability of Steel
The artificial neural network models for copper are placed in Table 12 according to the lowest RMSE as well.
Table 12.
Results after training for copper with six different architectures of ANNs.
According to the given criterion (lowest RMSE), the optimizable neural network is placed in the first row (one hidden layer with 260 nodes). For this model, the predicted response vs. the experimental data for each observation from the training dataset is shown in Figure 16. For the same model, the predicted response vs. the experimental unseen data is shown in Figure 17. Figure 17 also shows the relationship between the residuals and the true data.
Figure 16.
Performance evaluation of optimized neural network on training dataset (copper).
Figure 17.
Performance evaluation of optimized neural network on test dataset (copper).
The optimized neural network has a very low RMSE, while the coefficient of determination R2 is negative, which shows that the regression performed poorly. Nevertheless, the residuals in Figure 17 are very small (lower than 0.1), which means that the predicted values are close to the experimental data.
3.1.8. A Comparison between the Experimental and Predicted Data
Predicted and experimental data for five different steels are given in Table 13. Steel C60E is chosen from the EN 10083-2:2006 standard (steels for quenching and tempering) as a non-alloy steel. The alloy steels 41Cr4 and 46Cr2 are grades chosen from the standard EN 10083-3:2006 (steels for quenching and tempering). Steel 17CrNi6-6 is a grade chosen from the standard EN 10084:2008 (case-hardening steels). Steel 65Mn4 is chosen as a grade that is not part of any standard. This steel shows relatively high hardness values close to the quenched end of the Jominy specimen, but the hardness declines after a distance of only 7 mm from the quenched end, which can be characterized as low hardenability.
Table 13.
Predicted vs. experimental data for content of chemical elements.
For each of these steels, according to the steel grades, the minimum and maximum values for defined chemical elements are given too. The predicted values for chemical elements, for all steels, are within the limits defined by the steel grades.
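In a deployment setting, the seven per-element models are simply applied to the same 26-element predictor vector built from the required Jominy curve. The sketch below illustrates this step; the cell array of trained models, the element order and the variable names are assumptions made for illustration.

```matlab
% Minimal sketch: predict a full chemical composition from one required Jominy curve.
% models : 1x7 cell array of trained per-element regression models (assumed to exist)
elements    = {'C', 'Mn', 'Si', 'Cr', 'Ni', 'Mo', 'Cu'};   % assumed training order
xNew        = [jominyHRC, martensiteFlag];   % 26 predictors for the required curve
composition = zeros(1, numel(elements));
for i = 1:numel(elements)
    composition(i) = predict(models{i}, xNew);   % predicted wt.% of element i
end
disp(array2table(composition, 'VariableNames', elements));
```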
4. Discussion
The results for the investigated alloying elements can be divided into three groups. The first group includes carbon, manganese and chromium. These elements are mandatory for almost all steel grades within the standards EN 10083-2, EN 10083-3 and EN 10084 [,,]. The results for these elements show excellent values for the root mean square error (RMSE) and the coefficient of determination (R2), and the residual tests show very small differences between predicted and experimental values. Carbon is commonly recognized as the alloying element with the biggest impact on hardenability, and the presented results confirm this. The three neural networks with the lowest RMSE (below 0.011) also achieve an excellent coefficient of determination (R2) greater than 99%, which demonstrates a very strong correlation between the Jominy test hardness values and the wt.% of carbon. Even with a rather simple neural network, such as the narrow neural network with only one hidden layer, a high correlation was obtained. For chromium, slightly weaker correlation values were obtained: the coefficient of determination is 90%, but the RMSE is relatively low, which also indicates a strong correlation between the Jominy test hardness values and the wt.% of chromium. The RMSE for manganese is similar to that for chromium, while the coefficient of determination is slightly lower (82%). The results for both metrics, the RMSE and R2, show that carbon, manganese and chromium have a significant impact on the hardenability of steel.
The second group includes nickel and molybdenum. The root mean square error for molybdenum is very small, and the residual tests for this element show values lower than 0.1%. The root mean square error for nickel is also small, while the coefficient of determination is 90%; a deep (trilayered) neural network with three hidden layers of ten nodes each was used to achieve these results. Nickel and molybdenum are mandatory alloying elements for some steel grades within the researched standards, so more alloy steels containing nickel and molybdenum should be incorporated into the models in order to improve them.
The third group includes silicon and copper. For all steel grades, silicon is defined with a maximum of 0.4%, while copper is not mandatory for any steel grade within the investigated standards [,,]. The RMSE for both elements shows very good results. On the other hand, the coefficient of determination is below zero for silicon and close to 0 for copper. Silicon is not a carbide former and has a small influence on hardenability. In low concentrations, silicon is primarily used as a potent deoxidizer []. Only a significant increase in mass content up to 0.75% enhances hardenability [,]. The investigated steels have less than 0.35% of copper. Such a small amount of copper has little or no effect on hardenability.
5. Conclusions
This paper presents neural network models for designing the chemical composition of steels for heat treatment. The neural network models were developed based on data for 470 steels, comprising the consecutive hardness values at 13 distances from the quenched end of the Jominy specimen and the wt.% content of the alloying elements C, Si, Mn, Cr, Ni, Mo and Cu. The complex regression analysis also took into account whether at least 50% martensite was present in the microstructure.
Acceptable results were obtained from the created neural network models for the design of the chemical composition of steel. The predicted results are in accordance with the experimental data for three representative steels (C60E, 41Cr4 and 17CrNi6-6) from three different standards (EN 10083-2, EN 10083-3 and EN 10084) (Table 13). Customers are often forced to choose a specific steel grade that can provide the required hardenability; often, that grade has excessive hardenability, which unnecessarily leads to a greater use of alloying elements. It is therefore important that the chemical composition be designed according to the customer’s requirements, to obtain the required hardenability at relatively low production cost.
The developed artificial neural network models can be successfully used to predict the chemical composition of steels for heat treatment from the known shape of the Jominy curve. The predicted values for all seven chemical elements are close to the experimental values and within the limits defined by the steel grades. To design any artificial neural network, a relatively large number of experimental data is required. With additional experimental data, especially for nickel and molybdenum steels, used for training and testing the neural networks, the proposed models can be further improved.
Author Contributions
Conceptualization, N.T. and W.S.; methodology, N.T. and W.F.G.; software, N.T., D.I. and W.F.G.; validation, W.S. and D.I.; formal analysis, N.T.; investigation, W.S. and N.T.; resources, D.I.; data curation, N.T. and W.S.; writing—original draft preparation, N.T.; writing—review and editing, W.S.; visualization, N.T.; supervision, W.S.; project administration, W.S.; funding acquisition, W.S. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy reasons.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Liščić, B. Hardenability. In Steel Heat Treatment Handbook, 2nd ed.; Totten, G.E., Ed.; CRC Press: Boca Raton, FL, USA, 2007; pp. 213–276. [Google Scholar]
- Mesquita, R.A.; Schneider, R.E. Introduction to Heat Treating of Tool Steels. In ASM Handbook Volume 4D: Heat Treating of Irons and Steels; Dossett, J.L., Totten, G.E., Eds.; ASM International: Cleveland, OH, USA, 2014; pp. 277–287. [Google Scholar] [CrossRef]
- Jominy, W.E.; Boegehold, A.L. A hardenability test for carburizing steel. Trans. ASM 1938, 26, 574–606. [Google Scholar]
- ISO 642:1999; Steel—Hardenability Test by End Quenching (Jominy Test). International Organization for Standardization: Geneva, Switzerland, 1999.
- ASTM A255-10; Standard Test Methods for Determining Hardenability of Steel. ASTM International: West Conshohocken, PA, USA, 2010.
- JIS G 0561; Method of Hardenability Test for Steel (End Quenching Method). Japanese Standards Association: Tokyo, Japan, 2006.
- Grossmann, M.A.; Asimov, M.; Urban, S.F. Hardenability, its relation to quenching and some quantitative data. In Hardenability of Alloy Steels; ASM: Cleveland, OH, USA, 1939; pp. 124–196. [Google Scholar]
- Grossmann, M.A. Hardenability Calculated from Chemical Composition. AIME Trans. 1942, 155, 227–255. [Google Scholar]
- Crafts, W.; Lamont, J.L. Hardenability and Steel Selection; Pitman Publishing Corporation: Toronto, ON, Canada, 1949; pp. 147–175. [Google Scholar]
- Kramer, I.R.; Hafner, R.H.; Toleman, S.L. Effect of Sixteen Alloying Elements on Hardenability of Steel. Trans. AIME 1944, 158, 138–158. [Google Scholar]
- Comstock, G.F. The influence of titanium on the hardenability of steel. AIME Trans. 1945, 1, 148–150. [Google Scholar]
- Hodge, J.M.; Orehoski, M.A. Relationship between the hardenability and percentage of carbon in some low alloy steels. AIME Trans. 1946, 167, 627–642. [Google Scholar]
- Brown, G.T.; James, B.A. The accurate measurement, calculation, and control of steel hardenability. Metall. Trans. 1973, 4, 2245–2256. [Google Scholar] [CrossRef]
- Doane, D.V. Application of Hardenability Concepts in Heat Treatment of Steels. J. Heat Treat. 1979, 1, 5–30. [Google Scholar] [CrossRef]
- Mangonon, P.L. Relative hardenabilities and interaction effects of Mo and V in 4330 alloy steel. Metall. Trans. A 1982, 13, 319–320. [Google Scholar] [CrossRef]
- Tartaglia, J.M.; Eldis, G.T. Core Hardenability Calculations for Carburizing Steels. Metall. Trans. A 1984, 15, 1173–1183. [Google Scholar] [CrossRef]
- Kasuya, T.; Yurioka, N. Carbon Equivalent and Multiplying Factor for Hardenability of Steel. In Proceedings of the 72nd Annual AWS Meeting, Detroit, MI, USA, 15–19 April 1991. [Google Scholar] [CrossRef]
- Yamada, M.; Yan, L.; Takaku, R.; Ohsaki, S.; Miki, K.; Kajikawa, K.; Azuma, T. Effects of Alloying Elements on the Hardenability, Toughness and the Resistance of Stress Corrosion Cracking in 1 to 3 mass % Cr Low Alloy Steel. ISIJ Int. 2014, 54, 240–247. [Google Scholar] [CrossRef]
- Filetin, T.; Majetić, D.; Žmak, I. Prediction the Jominy curves by means of neural networks. In Proceedings of the 11th IFHTSE Congress, Florence, Italy, 19–21 October 1998. [Google Scholar]
- Dobrzanski, L.A.; Sitek, W. Application of a neural network in modelling of hardenability of constructional steels. J. Mater. Process. Technol. 1998, 78, 59–66. [Google Scholar] [CrossRef]
- Bhadeshia, H.K.D.H. Neural networks in materials science. ISIJ Int. 1999, 39, 966–979. [Google Scholar] [CrossRef]
- Filetin, T.; Majetić, D.; Žmak, I. Application of Neural Networks in Predicting the Steel Properties. In Proceedings of the 10th International DAAAM Symposium, Vienna, Austria, 21–23 October 1999. [Google Scholar]
- Sitek, W.; Dobrzanski, L.A.; Zacłona, J. The modelling of high-speed steels’ properties using neural networks. J. Mater. Process. Technol. 2004, 157–158, 245–249. [Google Scholar] [CrossRef]
- Sitek, W. Methodology of High-Speed Steels Design Using the Artificial Intelligence Tools. J. Achiev. Mater. Manuf. Eng. 2010, 39, 115–160. [Google Scholar]
- Bishop, C.M.; Bishop, H. The Deep Learning Revolution. In Deep Learning: Foundations and Concepts; Springer International Publishing: Berlin/Heidelberg, Germany, 2024; pp. 1–22. [Google Scholar]
- Sarker, I.H. Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications and Research Directions. SN Comput. Sci. 2021, 2, 420. [Google Scholar] [CrossRef]
- Sitek, W.; Trzaska, J. Practical Aspects of the Design and Use of the Artificial Neural Networks in Materials Engineering. Metals 2021, 11, 1832. [Google Scholar] [CrossRef]
- Taherdoost, H. Deep Learning and Neural Networks: Decision-Making Implications. Symmetry 2023, 15, 1723. [Google Scholar] [CrossRef]
- Park, S.; Kim, C.; Kang, N. Artificial Neural Network-Based Modelling for Yield Strength Prediction of Austenitic Stainless-Steel Welds. Appl. Sci. 2024, 14, 4224. [Google Scholar] [CrossRef]
- Fu, Z.; Liu, W.; Huang, C.; Mei, T. A Review of Performance Prediction Based on Machine Learning in Materials Science. Nanomaterials 2022, 12, 2957. [Google Scholar] [CrossRef]
- Yan, F.; Song, K.; Gao, L.; Xuejun, W. DCLF: A divide-and-conquer learning framework for the predictions of steel hardness using multiple alloy datasets. Mater. Today Commun. 2022, 30, 103195. [Google Scholar] [CrossRef]
- Geng, X.; Cheng, Z.; Wang, S.; Peng, C.; Ullah, A.; Wang, H.; Wu, G. A data-driven machine learning approach to predict the hardenability curve of boron steels and assist alloy design. J. Mater. Sci. 2022, 57, 10755–10768. [Google Scholar] [CrossRef]
- Badini, S.; Regondi, S.; Pugliese, R. Unleashing the Power of Artificial Intelligence in Materials Design. Materials 2023, 16, 5927. [Google Scholar] [CrossRef] [PubMed]
- Guo, K.; Yang, Z.; Yu, C.-H.; Buehler, M.J. Artificial intelligence and machine learning in design of mechanical materials. Mater. Horiz. 2021, 8, 1153–1172. [Google Scholar] [PubMed]
- Bhandari, U.; Chen, Y.H.; Ding, H.; Zeng, C.Y.; Emanet, S.; Gradl, P.R.; Guo, S.M. Machine-Learning-Based Thermal Conductivity Prediction for Additively Manufactured Alloys. J. Manuf. Mater. Process. 2023, 7, 160. [Google Scholar] [CrossRef]
- Hodge, J.M.; Orehoski, M.A. Hardenability Effects in Relation to the Percentage of Martensite. AIME Trans. 1946, 167, 502–512. [Google Scholar]
- Canale, L.C.F.; Albano, L.L. Hardenability of Steel. In Comprehensive Materials Processing; Hashmi, S., Batalha, G.F., Eds.; Elsevier: Amsterdam, The Netherlands, 2014; Volume 12, pp. 39–97. [Google Scholar] [CrossRef]
- Totten, G.E.; Bates, C.E. Handbook of Quenchants and Quenching Technology; ASM International: Materials Park, OH, USA, 1993; pp. 35–68. [Google Scholar]
- Smoljan, B.; Iljkić, D.; Tomašić, N. Mathematical modelling of Hardness of Quenched and Tempered Steel. Arch. Mater. Sci. Eng. 2015, 74, 85–93. [Google Scholar]
- Liščić, B. Steel Heat Treatment. In Steel Heat Treatment Handbook, 2nd ed.; Totten, G.E., Ed.; CRC Press: Boca Raton, FL, USA, 2007; pp. 277–414. [Google Scholar]
- EN 10083-2:2006; Steels for Quenching and Tempering—Part 2: Technical Delivery Conditions for Non Alloy Steels. German Institute for Standardization: Berlin, Germany, 2006.
- EN 10083-3:2007; Steels for Quenching and Tempering—Part 3: Technical Delivery Conditions for Alloy Steels. German Institute for Standardization: Berlin, Germany, 2007.
- EN 10084:2008; European Committee for Standardization; Case Hardening Steels: Technical Delivery Conditions. German Institute for Standardization: Berlin, Germany, 2008.
- Gemechu, W.F.; Sitek, W.; Batalha, G.F. Improving Hardenability Modeling: A Bayesian Optimization Approach to Tuning Hyperparameters for Neural Network Regression. Appl. Sci. 2024, 14, 2554. [Google Scholar] [CrossRef]
- Massey, F.J. The Kolmogorov-Smirnov Test for Goodness of Fit. J. Am. Stat. Assoc. 1951, 46, 68–78. [Google Scholar] [CrossRef]
- Avdović, A.; Jevremović, V. Quantile-Zone Based Approach to Normality Testing. Mathematics 2022, 10, 1828. [Google Scholar] [CrossRef]
- Xiong, Z.; Cui, Y.; Liu, Z.; Zhao, Y.; Hu, M.; Hu, J. Evaluating explorative prediction power of machine learning algorithms for materials discovery using k-fold forward cross-validation. Comput. Mater. Sci. 2020, 171, 109203. [Google Scholar] [CrossRef]
- Trzaska, J.; Sitek, W. A Hybrid Method for Calculating the Chemical Composition of Steel with the Required Hardness after Cooling from the Austenitizing Temperature. Materials 2024, 17, 97. [Google Scholar] [CrossRef] [PubMed]
- Chicco, D.; Warrens, M.J.; Jurman, G. The coefficient of determination R-squared is more informative than SMAPE, MAE, MAPE, MSE and RMSE in regression analysis evaluation. PeerJ Comput. Sci. 2021, 7, e623. [Google Scholar] [CrossRef] [PubMed]
- Zemlyak, S.; Gusarova, O.; Khromenkova, G. Tools for Correlation and Regression Analyses in Estimating a Functional Relationship of Digitalization Factors. Mathematics 2022, 10, 429. [Google Scholar] [CrossRef]
- Wang, Z.; Qi, J.; Liu, Y. Effect of Silicon Content on the Hardenability and Mechanical Properties of Link-Chain Steel. J. Mater. Eng. Perform. 2019, 28, 1678–1684. [Google Scholar] [CrossRef]
- Salvetr, P.; Gokhman, A.; Nový, Z.; Motyčka, P.; Kotous, J. Effect of 1.5 wt% Copper Addition and Various Contents of Silicon on Mechanical Properties of 1.7102 Medium Carbon Steel. Materials 2021, 14, 5244. [Google Scholar] [CrossRef]