Article

Topological Optimization of Artificial Neural Networks to Estimate Mechanical Properties in Metal Forming Using Machine Learning

by David Merayo *, Alvaro Rodríguez-Prieto and Ana María Camacho
Department of Manufacturing Engineering, UNED, Juan del Rosal 12, 28040 Madrid, Spain
* Author to whom correspondence should be addressed.
Metals 2021, 11(8), 1289; https://doi.org/10.3390/met11081289
Submission received: 24 July 2021 / Revised: 9 August 2021 / Accepted: 13 August 2021 / Published: 16 August 2021

Abstract: The ability of a metal to be subjected to forming processes depends mainly on its plastic behavior and, thus, on the mechanical properties belonging to this region of the stress–strain curve. Forming techniques are among the most widespread metalworking procedures in manufacturing, and aluminum alloys are of great interest in fields as diverse as the aerospace sector and the food industry. A precise characterization of the mechanical properties is crucial to estimate the forming capability of equipment, but also for a robust numerical modeling of metal forming processes. Characterizing a material is a very relevant task in which large amounts of resources are invested, and this paper studies how to optimize a multilayer neural network to make, through machine learning, precise and accurate predictions about the mechanical properties of wrought aluminum alloys. This study focuses on the determination of the ultimate tensile strength, closely related to the strain hardening of a material; more precisely, a methodology is developed that, by randomly partitioning the input dataset, performs training and prediction cycles that allow estimating the average performance of each fully connected topology. In this way, trends are found in the behavior of the networks, and it is established that, for networks with at least 150 perceptrons in their hidden layers, the average predictive error stabilizes below 4%. Beyond this point, no really significant improvements are found, although computational requirements keep increasing.

1. Introduction

Aluminum alloys are among the most widely used materials in industry and, although their use is still far from being as widespread as that of steel, they have many advantages that make them a very interesting material whose use is growing steadily [1]. There is a huge number of aluminum alloys, but few of them are typically used in the industrial field [2], sometimes because it is difficult to introduce new solutions and sometimes because they are special materials with properties optimized to fulfill the requirements of their application [3].
Aluminum alloys are manufactured by different techniques [4]. Among their many properties of industrial interest, their high formability makes them especially suitable for metal forming processes. Moreover, a precise characterization of the mechanical properties is crucial to estimate the forming capability of equipment, but also for a robust numerical modeling of metal forming processes. Among all the mechanical properties, the ultimate tensile strength (UTS) plays a key role in the definition of the onset of plastic instability under tensile loading [3]. This mechanical property is closely related to the strain hardening of the metal and, therefore, to its forming capacity under metal forming processes. In uniaxial tension, the plastic deformation is limited by the value of the UTS since, once it is exceeded, the onset of plastic instability takes place, and from then on, the behavior of the material is almost unpredictable until fracture [5].
Aluminum alloys are well known to present high ductility [6], as they can withstand a great deal of plastic deformation before fracture, so they are widely used in metal forming operations [7]; such metals can be manufactured by sheet metal operations and bulk forming processes, such as forging, rolling, extrusion and drawing [2]. Ductile materials are able to absorb a great amount of energy before failing, a capacity known as toughness [8].
The design and production of some industrial components rely on the knowledge of the mechanical properties obtained by tensile testing [9]. The UTS may serve to determine the beginning of the plastic instability and provides an insight into the initiation of fracture or necking [10]. It can also be used as an input to estimate the forming force in conventional forming procedures such as stretch-bending or deep-drawing [10].
Metal forming is frequently employed to manufacture components, and the microstructure and mechanical properties of the parts are modified by these processes [2]. For example, the increase in ultimate stress and hardness observed during AA6063 extrusion is attributed to the grain size reduction and the temperature increase [11].
In addition to the UTS, the strain hardening exponent, the yield strength (YS), the process-induced residual stresses and the hardness are also important mechanical characteristics [1,3,10]. These properties offer an idea of the in-service behavior of the formed part. Furthermore, their correlations help to understand the response of the component [12]; i.e., knowing the difference between the UTS and the YS can help the designer predict how much additional stress a component can withstand before failure [5], because the YS defines the onset of plastic deformation and the UTS defines the onset of plastic instability.
Knowing the expected behavior of the materials used in industrial designs is critical; however, obtaining these data frequently requires large amounts of resources, which are normally not accessible [1]. Many tests are required to obtain relevant information, which entails having enough time, personnel and facilities available [13]. Characterizing a metal comprises many tests that require non-trivial quantities of resources [8].
Although it is a relatively new technology and not as widespread in materials science and manufacturing as in other areas [13], artificial intelligence (AI) and machine learning (ML) techniques have been successfully used to make predictions about the metallurgical properties of some materials [1,3,14,15,16].
Over recent years, AI and ML have received much attention in the field of materials modeling, due to their outstanding ability to analyze a huge amount of data and expose correlations between complex interrelated properties [17]. ML is, perhaps, the most relevant branch of AI, and is the science of making computers learn and act like humans without being explicitly programmed [18,19]. It is often used to discover hidden patterns in complex systems through a training process in which a great amount of noisy data is furnished as input [20,21]. ML can be classified into supervised learning (the machine learns from labeled data) and unsupervised learning (the machine finds patterns in the data without any external help) [20,22,23].
Among the most challenging topics in this field is the search for the best representation of input variables in ML models, which is commonly called feature engineering and comprises a set of activities such as feature extraction, feature construction and feature selection [23,24]. Feature engineering research is crucial to the application of ML.
Within current materials science, the scale and speed of data acquisition, the accuracy of the data and the volatility of the data are additional challenges for researchers [3]. This raises the question of how to use and analyze these data in a useful way that supports the decisions of developers and designers [18,23]. Material data tend to be wide in scope and, often, shallow in depth; here, depth should be understood as the number of observations of the state of a system. The lack of observations is due not only to the cost and difficulty of acquiring data (especially through experimentation), but also to the nature of the data itself. Nevertheless, fully employing the data is a key part of advanced design systems [23,25].
In recent years, an incipient trend in materials science research is the combination of existing experimental and numerical modeling methodologies with AI techniques [26,27,28]. In general, materials science advances thanks to accumulated experience and already established rules [29,30]. New advances in numerical modeling facilitate the methodical acquisition of large amounts of data, while complicating analysis, hypothesis formulation and pattern prediction. The rise of AI techniques makes up for this deficiency to a great extent [31].
Multilayer artificial neural networks (ANN) can be considered the most remarkable methodology among those included in the field of AI, because they have demonstrated their capabilities in almost all branches of knowledge and are currently receiving a lot of attention from researchers [32]. A multilayer network is able to learn a function by training on a labeled dataset and can then be used to perform regressions [33]. ANN are made up of perceptrons (neurons) grouped into layers (clusters of neurons) that communicate with each other (in general, perceptrons do not communicate with companions in their own layer) [13]. For a fully connected multilayer neural network, the time complexity of the backpropagation training is given by Equation (1), so it is highly recommended to minimize the number of hidden nodes to reduce the training time [1].
O(n · m · o · N · ∏_{i=1}^{k} h_i),    (1)
where n is the size of the training dataset, m is the number of features, o is the number of output perceptrons, N is the number of iterations and k is the number of hidden layers (each of them containing h_i nodes).
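As a quick sanity check on this scaling, Equation (1) can be evaluated for candidate topologies. The sketch below reads the trailing factor as a product over the hidden-layer sizes h_i; the function name and the iteration budget N are illustrative assumptions, not taken from the paper.

```python
# Relative training-cost estimate from Equation (1); hypothetical helper.
def backprop_cost(n, m, o, N, hidden):
    """O(n * m * o * N * prod(h_i)) for hidden layer sizes h_i."""
    cost = n * m * o * N
    for h in hidden:
        cost *= h
    return cost

# Same dataset and iteration budget, two candidate topologies:
small = backprop_cost(n=2137, m=11, o=1, N=1000, hidden=[10, 10])
large = backprop_cost(n=2137, m=11, o=1, N=1000, hidden=[200, 200])
print(large // small)    # 400: the larger topology costs 400x more
```

Under this estimate, growing both hidden layers from 10 to 200 nodes multiplies the training cost by 400, which is why the optimization also tracks computational requirements.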
The main objective of this work is to develop a working methodology that allows optimizing the topology of a multilayer neural network in such a way that it is capable of making predictions about the UTS of wrought aluminum alloys [34], maximizing the precision and accuracy of the estimation and minimizing the computational resources [20]. Although this paper only takes into account the UTS, this same approach could be applied to other properties that have already been mentioned, such as the YS or the elongation at break (A).

2. Methodology

This work is developed following a three-stage scheme, and the data generated in each of them are used as input for the subsequent one [13]. This workflow guarantees that the data that reach each phase are correctly prepared and processed, and are ready to be employed. Therefore, the information resulting from the entire process is a consequence of the initial dataset.
Figure 1 schematically shows the three stages that compose the methodology of this work: in the first stage, an initial input dataset is created; in the second stage, the optimization process is carried out through training-prediction cycles; and in the third phase, all available information is analyzed.

2.1. Input Dataset Preparation

All initial data on the properties of the materials were obtained from Matmatch GmbH (Munich, Germany) [35]. It is an online library that contains freely accessible specification sheets about material properties [13]. These include a large number of aluminum alloys with very heterogeneous information. The volume of data initially obtained comprises more than 5000 materials and more than 350 properties [1,3].
After obtaining these data, each record must be read and interpreted in an automated way. Each specification sheet contains much more information than is used during this study, so it is necessary to discard the irrelevant data [35]. The following considerations are taken into account to select the records that are found to be useful:
  • Only datasheets that contain the value of the ultimate tensile strength (UTS) at 20 °C are considered.
  • Only alloys whose chemical composition is defined at more than 95% are taken into account [13] (note that some datasheets are poorly defined).
  • Only 9 chemical elements are considered to define the chemical composition of the alloys [29]: Al, Zn, Cu, Si, Fe, Mn, Mg, Ti and Cr. The mass contribution of all other chemical elements is regrouped as "Other".
  • Only wrought alloys are considered in this study.
  • Only the specification sheets that include the temper of the alloy are considered. This study only considers the following tempers: F (as fabricated), O (annealed), H (strain hardening) and T (thermally treated) [7].
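The selection rules above can be expressed as a small filter. The field names and record layout in this sketch are assumptions for illustration, not the authors' actual data schema.

```python
# Sketch of the record-selection rules; hypothetical record fields.
ELEMENTS = {"Al", "Zn", "Cu", "Si", "Fe", "Mn", "Mg", "Ti", "Cr"}
TEMPERS = {"F", "O", "H", "T"}        # as fabricated, annealed,
                                      # strain hardening, thermally treated

def is_usable(record):
    """Keep only datasheets that meet the selection criteria."""
    comp = record.get("composition", {})      # element -> mass %
    return (
        record.get("uts_20C") is not None     # UTS at 20 C present
        and sum(comp.values()) > 95.0         # composition > 95% defined
        and record.get("wrought", False)      # wrought alloys only
        and record.get("temper") in TEMPERS   # temper included
    )

def regroup_other(comp):
    """Keep the 9 considered elements; lump the rest as 'Other'."""
    kept = {e: comp.get(e, 0.0) for e in ELEMENTS}
    kept["Other"] = max(0.0, 100.0 - sum(kept.values()))
    return kept
```

A record failing any single criterion (missing temper, under-defined composition, cast rather than wrought) is discarded, which is how the initial 5000+ materials shrink to the 2671 retained below.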
After taking all these considerations into account, only 2671 materials are retained (the discarded records do not meet the aforementioned conditions), with 11 input properties plus the UTS. These alloys constitute the initial dataset on which the entire study is developed. One of these properties is categorical (temper) and must be mapped as an integer, while the other 11 are numerical (UTS and chemical composition) and must be normalized to avoid bias [20,22]. Normalization is carried out using Equation (2).
x̃_i = (x_i − x_min) / (x_MAX − x_min),    (2)
where x_i is each of the non-normalized input values, x̃_i is the related normalized value in [0, 1], x_min is the minimum value for that parameter and x_MAX is the maximum value.
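Equation (2) amounts to column-wise min-max scaling. A minimal pure-Python sketch follows; the example UTS values are illustrative, not taken from the dataset.

```python
# Min-max normalization per Equation (2), applied to one numerical column.
def normalize(values):
    """Map each x_i to (x_i - x_min) / (x_MAX - x_min), giving [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # constant column: avoid division by zero
        return [0.0 for _ in values]
    return [(x - lo) / (hi - lo) for x in values]

uts = [90.0, 310.0, 572.0]            # illustrative UTS values (MPa)
print(normalize(uts)[0], normalize(uts)[2])   # 0.0 1.0
```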

2.2. Network Optimization by Training-Prediction

The ANN topology denotes the way in which perceptrons are associated and is an essential characteristic of the performance of the network [19]. Layers are shapeless in the sense that all of their nodes are equally relevant, connected in the same way, and lack differentiators [22]. Only the initialization step and the subsequent training make their importance change [25].
In this study, the neural network is defined as a fully connected multilayer feedforward topology [33], which comprises an input layer, two hidden layers and an output layer. In this topology, the perceptrons in each layer are connected only to those in the next layer, so that the information flows in one direction only, from the input layer to the output layer [22].
The neural network receives an input vector of 11 elements (chemical composition and temper) and returns a prediction of the expected UTS [3]. Therefore, the input layer is made up of 11 nodes, and the output layer contains a single node. Additionally, the network topology contains two hidden layers whose numbers of nodes are to be optimized [19]. Figure 2 shows a schematic representation of the network in which the hidden layers are represented as squares.
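At the level of shapes, the 11 → h1 → h2 → 1 topology of Figure 2 can be sketched as below. The random weights and the ReLU activation are illustrative assumptions; the paper does not state the activation function used.

```python
import numpy as np

# Shape-level sketch of the fully connected 11 -> h1 -> h2 -> 1 network.
rng = np.random.default_rng(0)

def make_mlp(h1, h2, n_in=11, n_out=1):
    sizes = [n_in, h1, h2, n_out]
    return [(rng.standard_normal((a, b)) * 0.1, np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:       # hidden layers only: ReLU (assumed)
            x = np.maximum(x, 0.0)
    return x                          # one UTS estimate per input row

net = make_mlp(150, 150)              # a topology from the tested grid
batch = rng.random((534, 11))         # e.g. one testing subset
print(forward(net, batch).shape)      # (534, 1)
```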
Different topologies are tested to carry out the network optimization: the number of nodes in both hidden layers changes in increments of 10 nodes (from 10 to 200). In this way, 400 topologies are obtained, with a total number of nodes that varies between 32 and 412. For each of these networks, 10 independent training and prediction iterations are carried out [1,3].
Each iteration consists of four phases:
  • Dataset shuffling and splitting to create a training subset (80% of the records, i.e., 2137) and a testing subset (the remaining 20%, i.e., 534).
  • ANN training, using the data contained in the training subset.
  • Prediction of the properties for the records in the testing subset.
  • Results and data storage for further analysis.
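The shuffle-and-split phase above can be sketched as follows; the cut reproduces the 2137/534 record counts, while training and prediction themselves are stood in by the caller and are out of scope here.

```python
import random

# Sketch of one iteration's data handling: shuffle and split 80/20.
def split_dataset(records, train_frac=0.8, seed=None):
    shuffled = records[:]                      # leave the original intact
    random.Random(seed).shuffle(shuffled)
    cut = round(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

dataset = list(range(2671))                    # stand-in for alloy records
train, test = split_dataset(dataset, seed=42)
print(len(train), len(test))                   # 2137 534
```

Because each iteration reshuffles before splitting, the 10 iterations per topology see different train/test partitions, which is what makes the averaged error a meaningful performance estimate.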
The training is configured as follows [1]:
  • Calculation of the learning rate for each parameter using adaptive moment estimation (ADAM) with β1 = 0.9, β2 = 0.999 (algorithm parameters), η = 0.001 (step size) and ϵ = 10⁻⁸ (stability factor) [36].
  • Early stopping after 20 iterations without significant changes.
  • Training stops when a training error of less than 0.1 is reached.
  • Maximum of 100,000 training epochs to avoid infinite loops.
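The ADAM update with these exact hyperparameters can be written compactly. The sketch below applies it to a toy quadratic objective rather than to the network's backpropagated gradients, so the objective and step count are illustrative assumptions.

```python
import numpy as np

# Minimal ADAM loop with the stated hyperparameters
# (beta1 = 0.9, beta2 = 0.999, eta = 0.001, eps = 1e-8).
def adam_minimize(grad, x0, steps=20000, eta=1e-3,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)                    # first moment estimate
    v = np.zeros_like(x)                    # second moment estimate
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)        # bias-corrected moments
        v_hat = v / (1 - beta2 ** t)
        x = x - eta * m_hat / (np.sqrt(v_hat) + eps)
    return x

# f(x) = ||x - 3||^2 has gradient 2(x - 3) and its minimum at x = 3.
x_min = adam_minimize(lambda x: 2.0 * (x - 3.0), x0=[0.0, 10.0])
print(np.round(x_min, 2))
```

Note how the per-parameter effective step stays close to η while the gradient keeps its sign, which is the "calculation of the learning rate for each parameter" referred to above.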

2.3. Data Analysis

The optimization process generates a large amount of data that must be processed to generate information that allows drawing conclusions about the network performance [3,13]. For each of the 400 considered topologies, the predictive performance of the 10 iterations is calculated and stored.
With the information obtained about the tests on these topologies, it is possible to build a performance map of 20 × 20 cells; each of these cells represents a topology described by the number of nodes in the two hidden layers. In each position of this data structure, it is possible to store statistical information that allows making comparisons between the different topologies.
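A structural sketch of such a performance map follows; the errors here are simulated placeholders, whereas in the study they come from the training-prediction cycles.

```python
import numpy as np

# 20 x 20 performance map: per topology (h1, h2), store the mean and the
# standard deviation of the predictive error over 10 iterations.
sizes = list(range(10, 201, 10))              # 10, 20, ..., 200 nodes
rng = np.random.default_rng(1)

mean_err = np.zeros((20, 20))                 # accuracy metric per cell
std_err = np.zeros((20, 20))                  # precision metric per cell
for i, h1 in enumerate(sizes):
    for j, h2 in enumerate(sizes):
        errors = rng.random(10) * 100.0 / (h1 + h2)   # placeholder errors (%)
        mean_err[i, j] = errors.mean()
        std_err[i, j] = errors.std()

print(mean_err.shape, std_err.shape)          # (20, 20) (20, 20)
```

Tables A1 and A2 in the appendix are exactly such maps, holding the measured mean and standard deviation of the error for every tested topology.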
Network topologies with the lowest average predictive error (highest accuracy), lowest standard deviation of predictive error (highest precision), and shortest training time (lowest resource usage) are preferred [20].

3. Results and Discussion

Table 1 contains some statistical metrics about the information contained in the input dataset. It is interesting to highlight the wide range of values associated with the UTS.
As expected, more complex topologies tend to be more accurate than those with fewer perceptrons; in fact, the lowest error (2.88%) is obtained by a network with 160 and 200 nodes in its two hidden layers, whereas the highest error (95.28%) occurs for the simplest network considered (10 and 10 nodes). More detailed information can be found in Table A1.
Figure 3 graphically shows the average predictive error (values above 20% are trimmed to avoid scale-related issues). A region of very low accuracy (error greater than 10%) can be seen for topologies with fewer than 150 perceptrons, while for more complex networks the error remains below 10%. It is interesting to note that, as can be seen in the three-dimensional figure, the transition between both regions is quite abrupt, forming a step: this transition zone (yellow) separates an almost flat area from a steep one.
This transition can be interpreted as a frontier before which the predictions cannot be trusted because the error is excessive and the model is very unstable (small changes in the network produce large differences).
Figure 4 shows the value of the mean predictive error as a function of the number of nodes in the hidden layers. It can be seen that the error asymptotically tends to a value close to 2%. Note that the number of nodes in the hidden layers is the sum of the number of perceptrons in both layers.
It is interesting to highlight that, for neural networks with more than 300 nodes, the average predictive error remains, in all cases, approximately stable at around 4%. This is a very interesting result since it establishes a boundary beyond which no significant improvement is seen, although computational requirements keep increasing.
In view of these results, it is clear that complex topologies should be privileged over simpler ones; however, it should also be considered that, as the number of nodes increases, achieving significant improvements becomes very expensive in computational terms and, in fact, it is found that a more complex topology does not always guarantee better accuracy.
On the other hand, it is not only necessary to take into account the accuracy (related to the average predictive error) of the results, but also the precision (related to the predictive error standard deviation).
The standard deviation of the error gives an idea of the repeatability of the estimates and, together with the average error, allows identifying the confidence range in which a prediction lies. As with the average error, the more complex networks are more precise (the minimum is reached for a network with 180 and 160 perceptrons, respectively), whereas the simpler ones show a higher standard deviation (the maximum is reached for a network with 30 and 10 nodes, respectively). More detailed information can be found in Table A2.
Figure 5 graphically shows the standard deviation of the predictive error (values above 10% are trimmed to avoid scale-related issues). The distribution of values is much more irregular than in the case of the average error. However, three areas can be distinguished: for networks with fewer than 150 perceptrons, the standard deviation is high (mostly above 10%); for topologies with between 150 and 250 nodes, a very irregular transition zone appears with scattered high values; and, for the more complex networks (more than 250 nodes), the values mostly remain below 5%.
As can be seen by comparing Figure 3 and Figure 5, although the standard deviation is distributed in a much more irregular way, the trends of both statistical metrics are similar. For networks whose hidden layers contain 150 or more perceptrons, the accuracy and precision stabilize, and there are hardly any significant differences in the performance of these topologies.

4. Conclusions and Future Work

This paper studies how to optimize the topology of a multilayer artificial neural network to carry out predictions about mechanical properties of aluminum alloys, such as the UTS, using machine learning. It is a contribution of great industrial interest, since it explores how to obtain sufficiently precise estimates at minimal computational cost and, therefore, using fewer resources. The main conclusions of this work are as follows:
  • An artificial neural network with two hidden layers can predict the UTS of wrought aluminum alloys by taking their chemical composition and temper as input. The average predictive error stabilizes below 4% and even reaches 2.88% in this study.
  • The predictive ability of an ANN with two hidden layers to estimate the UTS of aluminum alloys stabilizes for topologies that include 150 or more perceptrons in their hidden layers. The precision and accuracy of these networks do not show significant differences that would allow affirming that one topology is really better than the others.
  • A multilayer ANN can be used as a tool to make predictions, through machine learning, about the mechanical behavior of a piece of aluminum alloy subjected to forming processes. In theory, such networks can learn to approximate any nonlinear function if the input dataset is large enough and the network has enough perceptrons [37].
This study presents a methodology that allows optimizing the topology of a neural network whose task is to make predictions about the UTS using techniques based on machine learning. In the same way, it would be possible to apply this same approach to other properties and even to other materials.
Since this scheme of work has been shown to perform adequately, a similar method could be applied to test other, more complex network architectures. There is a multitude of machine learning architectures that allow different problems to be addressed [33].

Author Contributions

Conceptualization, D.M., A.R.-P. and A.M.C.; methodology, D.M.; software, D.M.; validation, D.M., A.R.-P. and A.M.C.; formal analysis, D.M.; investigation, D.M.; resources, A.R.-P. and A.M.C.; data curation, D.M.; writing—original draft preparation, D.M.; writing—review and editing, D.M., A.R.-P. and A.M.C.; visualization, D.M.; supervision, A.R.-P. and A.M.C.; project administration, A.R.-P. and A.M.C.; funding acquisition, A.R.-P. and A.M.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was developed within the framework of the Doctorate Program in Industrial Technologies of the UNED and was funded by the Annual Grants Call of the E.T.S.I.I. of the UNED via the projects of reference 2021-ICF07 and 2021-ICF08, as well as by the Innovation Teaching Project of the GID2016-28 focused on “Reliability and Advanced Failure Prognosis in industrial applications applied to the teaching of Materials Technology and Processing”.

Data Availability Statement

All data can be found in Appendix A.

Acknowledgments

We extend our acknowledgments to the Research Group of the UNED “Industrial Production and Manufacturing Engineering (IPME)” and the Industrial Research Group “Advanced Failure Prognosis (AFP) for Engineering Applications”. We also thank Matmatch GmbH for freely supplying all the material data employed to accomplish this study.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations and symbols are used in this manuscript:
ADAM: Adaptive moment estimation
AI: Artificial intelligence
ANN: Artificial neural networks
β_n: ADAM algorithm parameters
ϵ: ADAM stability factor
η: ADAM step size
m: ADAM first moment estimate
ML: Machine learning
UTS: Ultimate tensile strength
S_x: Standard deviation
ν: ADAM second moment estimate
x̄: Average
YS: Yield strength

Appendix A. Numerical Results

Table A1 shows the average predictive error after testing each of the 400 topologies through 10 independent training–prediction iterations.
Table A1. Average predictive error (%) for each topology as a function of the number of nodes in the hidden layers (columns: nodes in the first hidden layer; rows: nodes in the second hidden layer).
    |    10    20    30    40    50    60    70    80    90   100   110   120   130   140   150   160   170   180   190   200
 10 | 95.28 74.79 44.11 34.89 28.74 23.07 20.56 19.26 17.96 13.79 13.98 12.86 12.55 11.70 11.60 10.48  8.68  9.42  9.86 10.62
 20 | 57.57 39.99 36.42 29.08 22.67 21.50 16.95 15.34 15.52 14.29 12.68 11.64 10.31  9.29  8.84  9.57  8.33  8.30  9.07  6.79
 30 | 42.92 32.07 26.03 25.78 17.93 21.18 14.82 14.81 13.20 10.38 10.37  9.57 10.06 11.14  7.34  8.08  7.89  6.37  7.00  7.13
 40 | 38.87 28.11 21.55 20.78 15.44 14.82 13.33 12.82 11.55  9.26 12.81  9.13  9.02  8.56  8.56  8.27  8.43  6.51  7.19  5.98
 50 | 28.26 24.81 19.79 18.44 16.25 15.39 14.40 12.23  9.71 10.05  9.91  7.86  9.37  8.64  9.63  6.74  7.10  7.23  6.11  5.38
 60 | 26.05 22.47 19.38 16.83 14.03 13.50 11.19 10.70 10.86  8.80  9.88  9.54  7.88  6.84  5.93  6.97  7.58  7.84  6.62  4.57
 70 | 20.21 18.32 14.56 13.78 14.48 12.19 10.22 10.15  8.95  7.93  7.88  7.48  7.53  5.79  7.67  7.12  6.13  6.05  4.50  5.42
 80 | 18.73 16.20 16.92 14.03 10.41 10.17  8.79  7.96 10.60  8.24  9.34  6.65  8.23  6.88  6.14  4.94  4.89  6.91  4.35  5.01
 90 | 15.90 13.50 13.71 12.51  9.33 10.43 10.02 11.50  8.80  8.42  6.31  6.61  5.62  5.11  5.77  5.78  4.36  6.28  4.35  5.51
100 | 15.52 13.30 14.13  9.33 11.15  9.13  9.85  8.45  7.53  6.23  5.99  7.45  5.54  4.87  5.55  4.41  4.20  4.94  5.77  3.66
110 | 12.05 13.74 12.86  9.58  9.39  8.33  8.94  6.42  8.33  6.91  8.19  5.26  5.82  6.77  6.43  5.26  3.94  5.08  6.09  5.51
120 | 13.69 12.34 10.69 10.43 10.48  7.33  7.60  8.60  6.01  6.86  7.89  6.78  4.87  4.39  6.54  6.04  5.30  5.77  5.57  4.12
130 | 11.45  9.94 10.87  9.45  7.13  8.46  6.68  6.78  6.49  5.03  4.85  4.52  6.30  5.10  5.17  3.74  3.60  3.54  3.66  4.19
140 | 10.57  8.26  9.72  8.36  8.62  7.70  6.71  6.83  6.01  6.82  5.56  6.65  6.45  5.09  6.05  5.63  4.51  4.30  3.89  3.49
150 | 10.13 10.59  8.20  8.53  7.54  6.26  5.77  6.43  6.93  6.59  6.98  5.13  5.06  5.76  3.64  5.47  3.31  4.23  5.18  3.08
160 |  8.28  9.58  8.06  7.28  7.99  6.88  6.05  7.30  6.21  5.48  7.41  5.68  5.27  5.62  3.54  5.48  3.28  4.06  3.02  3.96
170 |  9.19  7.77  6.65  7.98  7.59  7.16  5.01  7.28  6.32  4.14  6.29  6.13  5.38  4.24  5.82  5.23  3.31  5.39  3.03  3.78
180 |  9.88  7.68  8.91  7.66  6.36  5.04  6.65  4.53  5.18  3.98  3.78  5.03  4.53  5.42  3.35  3.15  4.41  4.95  3.74  2.77
190 |  7.01  8.05  6.31  5.50  5.01  7.55  4.65  4.24  3.95  3.89  4.87  5.72  3.72  5.47  4.10  4.05  5.12  3.22  5.01  3.64
200 |  8.72  7.92  5.30  7.15  4.74  5.73  5.26  6.82  3.79  4.90  5.90  4.45  3.25  5.07  4.27  2.88  3.07  4.81  3.67  3.49
Table A2 shows the standard deviation of the predictive error for each of the topologies that are tested.
Table A2. Standard deviation of the predictive error (%) for each topology as a function of the number of nodes in the hidden layers (columns: nodes in the first hidden layer; rows: nodes in the second hidden layer).
    |    10    20    30    40    50    60    70    80    90   100   110   120   130   140   150   160   170   180   190   200
 10 | 58.96 28.75 61.09 41.95 13.42 23.40  6.31  9.94 14.81 13.57 13.25 13.00 12.76  3.60  3.21  5.28  5.72  5.05  4.87  4.62
 20 | 19.33 47.56 13.39 16.32  8.62 18.35  6.16  5.14  7.12 11.86  4.15  5.40  8.93  8.26  3.32  2.75  5.08  8.61  2.39  3.99
 30 | 22.01 16.41 26.93 10.95 22.21 17.56  8.28  4.40 12.88  4.23 10.87  4.80  8.39  8.28  8.64  2.87  6.43  7.93  2.21  2.53
 40 | 17.07 31.23 24.15 20.37  8.87  7.79  6.70  6.03  3.40  3.28  3.88  4.19 11.49  4.30  2.86  7.13  2.30  3.65  3.23  7.03
 50 |  9.52 11.56  9.35  7.92  6.87  6.95  6.25  5.54  5.34  9.62  4.03  7.33  8.89  3.87  6.29  6.88  3.23  2.39  3.70  4.72
 60 | 22.65 10.43  7.95  4.66  4.46 12.15 12.45  9.26 11.20  3.04  2.95  3.99  8.77  8.28  2.87  2.18  7.99  3.06  3.33  2.24
 70 | 22.05 18.31  4.78 13.17 12.53  4.21  4.64 10.87  5.03  4.91  2.94  3.79  2.70  6.73  3.78  5.19  7.10  1.97  6.82  4.71
 80 | 17.48 21.58  4.26  6.41  6.29  3.91  5.55  5.11  3.36  3.97  2.53  4.49  1.90  2.50  2.32  1.78  5.58  1.82  2.17  1.97
 90 | 17.69 16.30  4.24 12.62  9.19  5.24 10.07  2.52  4.04  2.64  7.91  3.56  1.94  2.66  2.29  2.82  2.84  6.09  1.79  5.04
100 |  4.50 12.01  4.13 12.25 11.17  7.81  4.86  8.29  7.72  3.51  6.66  3.55  7.05  2.39  1.64  1.47  2.65  3.20  3.82  1.23
110 |  6.31 11.91  9.64  4.32  5.36  4.71  2.59  6.47  6.06  7.08  2.10  3.62  2.27  2.59  2.23  2.13  2.02  4.48  1.26  3.52
120 |  5.57 11.89  2.95  8.35  7.60  2.90  3.01  2.70  6.68  1.82  3.50  4.76  5.99  4.37  2.14  5.54  4.87  2.35  4.48  1.12
130 |  3.80  9.36  8.35  9.85  7.08  3.39  3.90  6.64  5.35  2.09  1.59  5.57  1.44  1.84  2.71  1.92  2.39  1.19  1.79  1.70
140 | 11.68  2.80  4.81  8.22  2.50  3.56  3.93  3.52  2.45  5.78  5.55  4.94  4.24  3.92  2.51  1.89  3.45  4.52  4.18  1.53
150 | 10.64  3.94  3.58  7.68  7.39  2.30  2.57  6.58  3.49  6.99  7.12  2.16  2.55  4.93  1.68  1.18  4.50  2.60  4.16  2.29
160 |  3.19  9.65  7.71  6.33  8.70  5.79  3.60  2.45  1.63  2.27  2.08  1.52  3.94  4.45  1.37  1.44  1.59  1.05  1.70  2.45
170 |  5.24  9.33  3.48  3.42  3.28  7.52  3.63  2.18  3.25  4.41  3.99  3.95  2.81  2.44  3.32  3.50  5.10  2.95  1.34  2.77
180 | 10.02  3.72  2.33  4.27  6.64  5.74  5.60  2.10  6.98  5.18  1.97  1.55  1.77  5.35  1.63  2.55  4.05  1.94  1.59  1.90
190 |  2.29  2.30  3.78  5.29  2.60  7.36  2.21  2.18  2.72  5.01  4.60  5.89  1.78  2.59  1.07  4.19  2.14  4.17  4.71  1.38
200 |  2.83  2.29  5.43  7.18  6.58  2.21  1.55  3.09  2.05  1.21  2.77  5.38  5.40  1.64  2.58  2.01  1.48  2.00  2.62  1.74

References

  1. Merayo, D.; Rodríguez-Prieto, A.; Camacho, A.M. Prediction of the Bilinear Stress-Strain Curve of Aluminum Alloys Using Artificial Intelligence and Big Data. Metals 2020, 10, 904.
  2. Alam, T.; Ansari, A.H. Review on Aluminium and Its Alloys for automotive applications. Int. J. Adv. Technol. Eng. Sci. 2017, 5, 278–294.
  3. Merayo, D.; Rodríguez-Prieto, A.; Camacho, A.M. Prediction of mechanical properties by artificial neural networks to characterize the plastic behavior of aluminum alloys. Materials 2020, 13, 5227.
  4. Ashkenazi, D. How aluminum changed the world: A metallurgical revolution through technological and cultural perspectives. Technol. Forecast. Soc. Chang. 2019, 143, 101–113.
  5. Hahn, G.; Rosenfield, A. Metallurgical factors affecting fracture toughness of aluminum alloys. Metall. Trans. A 1975, 6, 653–668.
  6. Yogo, Y.; Sawamura, M.; Iwata, N.; Yukawa, N. Stress-strain curve measurements of aluminum alloy and carbon steel by unconstrained-type high-pressure torsion testing. Mater. Des. 2017, 122, 226–235.
  7. Kaufman, J.G. Introduction to Aluminum Alloys and Tempers; ASM International: Russell Township, Geauga County, OH, USA, 2000.
  8. Kamaya, M.; Kawakubo, M. A procedure for determining the true stress–strain curve over a large range of strains using digital image correlation and finite element analysis. Mech. Mater. 2011, 43, 243–253.
  9. Hu, J.; Marciniak, Z.; Duncan, J. Mechanics of Sheet Metal Forming; Elsevier: Amsterdam, The Netherlands, 2002.
  10. Hosford, W.F.; Caddell, R.M. Metal Forming: Mechanics and Metallurgy; Cambridge University Press: Cambridge, UK, 2011.
  11. Zhao, Y.; Song, B.; Pei, J.; Jia, C.; Li, B.; Linlin, G. Effect of deformation speed on the microstructure and mechanical properties of AA6063 during continuous extrusion process. J. Mater. Process. Technol. 2013, 213, 1855–1863.
  12. Ilyas, M.; Hussain, G.; Rashid, H.; Alkahtani, M. Influence of Forming Parameters on the Mechanical Behavior of a Thin Aluminum Sheet Processed through Single Point Incremental Forming. Metals 2020, 10, 1461.
  13. Merayo, D.; Rodríguez-Prieto, A.; Camacho, A.M. Prediction of Physical and Mechanical Properties for Metallic Materials Selection Using Big Data and Artificial Neural Networks. IEEE Access 2020, 8, 13444–13456.
  14. Ghosh, I.; Das, S.K.; Chakraborty, N. An artificial neural network model to characterize porosity defects during solidification of A356 aluminum alloy. Neural Comput. Appl. 2014, 25, 653–662.
  15. Zhao, D.; Ren, D.; Zhao, K.; Pan, S.; Guo, X. Effect of welding parameters on tensile strength of ultrasonic spot welded joints of aluminum to steel. By experimentation and artificial neural network. J. Manuf. Process. 2017, 30, 63–74.
  16. Haghdadi, N.; Zarei-Hanzaki, A.; Khalesian, A.; Abedi, H. Artificial neural network modeling to predict the hot deformation behavior of an A356 aluminum alloy. Mater. Des. 2013, 49, 386–391.
  17. Huang, J.; Liew, J.; Ademiloye, A.; Liew, K. Artificial intelligence in materials modeling and design. Arch. Comput. Methods Eng. 2020, 28, 3399–3413.
  18. Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260.
  19. Merayo, D.; Rodriguez-Prieto, A.; Camacho, A.M. Comparative analysis of artificial intelligence techniques for material selection applied to manufacturing in Industry 4.0. Procedia Manuf. 2019, 41, 42–49.
  20. Jackson, P.C. Introduction to Artificial Intelligence; Courier Dover Publications: Mineola, NY, USA, 2019.
  21. Chen, C.T.; Gu, G.X. Machine learning for composite materials. MRS Commun. 2019, 9, 556–566.
  22. Joshi, P. Artificial Intelligence with Python; Packt Publishing Ltd.: Birmingham, UK, 2017.
  23. Dimiduk, D.M.; Holm, E.A.; Niezgoda, S.R. Perspectives on the impact of machine learning, deep learning, and artificial intelligence on materials, processes, and structures engineering. Integr. Mater. Manuf. Innov. 2018, 7, 157–172.
  24. Balachandran, P.V.; Xue, D.; Theiler, J.; Hogden, J.; Gubernatis, J.E.; Lookman, T. Importance of feature selection in machine learning and adaptive design for materials. In Materials Discovery and Design; Springer: Berlin/Heidelberg, Germany, 2018; pp. 59–79. [Google Scholar]
  25. Feng, S.; Zhou, H.; Dong, H. Using deep neural network with small dataset to predict material defects. Mater. Des. 2019, 162, 300–310. [Google Scholar] [CrossRef]
  26. Mueller, T.; Kusne, A.G.; Ramprasad, R. Machine learning in materials science: Recent progress and emerging applications. Rev. Comput. Chem. 2016, 29, 186–273. [Google Scholar]
  27. Lefik, M.; Boso, D.; Schrefler, B. Artificial neural networks in numerical modelling of composites. Comput. Methods Appl. Mech. Eng. 2009, 198, 1785–1804. [Google Scholar] [CrossRef]
  28. Sha, W.; Edwards, K. The use of artificial neural networks in materials science based research. Mater. Des. 2007, 28, 1747–1752. [Google Scholar] [CrossRef]
  29. Davis, J.R. Alloying: Understanding the Basics; ASM International: Russell Township, Geauga County, OH, USA, 2001. [Google Scholar]
  30. Naik, D.L.; Sajid, H.U.; Kiran, R. Texture-based metallurgical phase identification in structural steels: A supervised machine learning approach. Metals 2019, 9, 546. [Google Scholar] [CrossRef] [Green Version]
  31. Honysz, R. Modeling the Chemical Composition of Ferritic Stainless Steels with the Use of Artificial Neural Networks. Metals 2021, 11, 724. [Google Scholar] [CrossRef]
  32. Qian, L.; Winfree, E.; Bruck, J. Neural network computation with DNA strand displacement cascades. Nature 2011, 475, 368–372. [Google Scholar] [CrossRef]
  33. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. The Aluminum Association. Designations and Chemical Composition Limits for Aluminum Alloys in the Form of Castings and Ingot; The Aluminum Association Inc.: Arlington, VA, USA, 2006. [Google Scholar]
  35. Matmatch GmbH. Matmatch. 2021. Available online: https://matmatch.com/ (accessed on 15 June 2021).
  36. Kingma, D.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference for Learning Representations (ICLR 15), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  37. Hornik, K. Approximation capabilities of multilayer feedforward networks. Neural Netw. 1991, 4, 251–257. [Google Scholar] [CrossRef]
Figure 1. Overview of the methodology.
Figure 2. Multilayer artificial neural network scheme.
Figure 3. Average predictive error [%] (trimmed above 20%).
Figure 4. Predictive error [%] as a function of the number of nodes in both hidden layers.
Figure 5. Standard deviation of the predictive error [%] (trimmed above 10%).
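The average error and standard deviation mapped in Figures 3–5 come from repeated training/prediction cycles over random partitions of the input dataset. A minimal sketch of that evaluation loop is shown below; `evaluate_topology`, the toy data, and the `baseline` predictor (which stands in for a trained network and simply returns the mean training UTS) are illustrative assumptions, not the paper's actual implementation:

```python
import random
from statistics import mean, stdev

def evaluate_topology(train_and_predict, X, y, cycles=10, test_frac=0.2, seed=0):
    """Estimate the average predictive error [%] of one network topology by
    repeated random partitioning of the dataset into training and test sets.
    `train_and_predict(X_tr, y_tr, X_te)` stands in for training the ANN and
    predicting on the held-out samples."""
    rng = random.Random(seed)
    errors = []
    for _ in range(cycles):
        idx = list(range(len(X)))
        rng.shuffle(idx)  # fresh random partition each cycle
        n_test = max(1, int(test_frac * len(X)))
        test, train = idx[:n_test], idx[n_test:]
        preds = train_and_predict([X[i] for i in train], [y[i] for i in train],
                                  [X[i] for i in test])
        # Mean absolute percentage error over the held-out samples.
        errors.append(mean(abs(p - y[i]) / y[i] * 100 for p, i in zip(preds, test)))
    # Average error and its spread across cycles (cf. Figures 3 and 5).
    return mean(errors), (stdev(errors) if len(errors) > 1 else 0.0)

# Toy usage: a baseline "model" that always predicts the mean training UTS.
baseline = lambda X_tr, y_tr, X_te: [mean(y_tr)] * len(X_te)
X = [[0.5], [1.0], [1.5], [2.0], [2.5], [3.0]]   # illustrative features
y = [150.0, 200.0, 250.0, 300.0, 350.0, 400.0]   # illustrative UTS [MPa]
avg_err, std_err = evaluate_topology(baseline, X, y, cycles=5)
```

Averaging over many cycles smooths out the luck of any single split, which is what allows a per-topology comparison such as the one in Figure 4.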
Table 1. Statistical information about the input dataset.

|           | UTS [MPa] | Al [%] | Zn [%] | Cu [%] | Si [%] | Fe [%] | Mn [%] | Mg [%] | Ti [%] | Cr [%] | Other [%] |
|-----------|-----------|--------|--------|--------|--------|--------|--------|--------|--------|--------|-----------|
| Average   | 246.3     | 95.1   | 0.6    | 1.0    | 0.5    | 0.5    | 0.5    | 1.4    | 0.2    | 0.2    | 0.2       |
| Std. dev. | 115.1     | 2.6    | 1.3    | 1.6    | 0.6    | 0.2    | 0.4    | 1.4    | 0.2    | 0.1    | 0.3       |
| Median    | 230.0     | 95.9   | 0.3    | 0.3    | 0.4    | 0.5    | 0.5    | 0.9    | 0.1    | 0.1    | 0.0       |
| Min.      | 40.0      | 83.1   | 0.0    | 0.0    | 0.0    | 0.0    | 0.0    | 0.0    | 0.0    | 0.0    | 0.0       |
| Max.      | 700.0     | 99.7   | 7.8    | 6.3    | 12.3   | 1.6    | 1.6    | 5.5    | 0.8    | 0.9    | 2.0       |
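Summary rows like those of Table 1 can be reproduced with standard-library Python. The values below are illustrative placeholders for a handful of alloy records, not the actual dataset behind the table:

```python
from statistics import mean, median, stdev

# Hypothetical excerpt of the input dataset: UTS [MPa] and two composition
# columns [%] for three alloys (illustrative values only).
data = {
    "UTS": [310.0, 228.0, 110.0],
    "Al":  [93.5, 97.9, 99.3],
    "Mg":  [1.6, 1.0, 0.4],
}

# One summary row per statistic, one entry per column, as in Table 1.
summary = {
    col: {
        "Average":   round(mean(vals), 1),
        "Std. dev.": round(stdev(vals), 1),
        "Median":    round(median(vals), 1),
        "Min.":      min(vals),
        "Max.":      max(vals),
    }
    for col, vals in data.items()
}
print(summary["UTS"])
```

Note that `stdev` computes the sample standard deviation (n − 1 denominator); whether Table 1 uses the sample or population form is not stated in the source.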
Share and Cite

Merayo, D.; Rodríguez-Prieto, A.; Camacho, A.M. Topological Optimization of Artificial Neural Networks to Estimate Mechanical Properties in Metal Forming Using Machine Learning. Metals 2021, 11, 1289. https://doi.org/10.3390/met11081289