1. Summary
Heat treatment operations are used to change the material properties of components. The controlled input or extraction of heat is essential throughout the whole process. Quenching processes are typically performed in different liquid or gaseous media to achieve the desired material properties by establishing adequate heat transfer rates. A commonly used industrial hardening procedure is immersion quenching. In this process, the work pieces are heated to the desired temperature and then immersed in the liquid quenching medium.
The heat transfer phenomena occurring during the quenching process may pass through three different boiling regimes [1]. In the first stage, the component is surrounded by a vapor film, which insulates the work piece. The heat transfer is moderate in this stage and takes place by radiation and conduction through the vapor film.
In the second stage, when the temperature reaches the Leidenfrost point, the surface partially comes into contact with the coolant and nucleate boiling occurs, with the fastest cooling rate of the whole heat transfer process. Then, the surface cools down below the boiling point (or range) and only pure convection occurs. In this stage, the heat transfer is mainly controlled by the quenchant's specific heat and thermal conductivity, and by the temperature difference between the surface and the fluid combined with the fluid flow.
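To make the shape of such a curve concrete, the following minimal Python sketch encodes the three regimes as a piecewise-linear HTC over the surface temperature. Every number in it (Leidenfrost point, peak value, etc.) is purely illustrative and not taken from the database described here:

```python
def htc_vs_surface_temp(T):
    """Illustrative HTC(T) shape over the three boiling regimes.

    All temperatures [C] and HTC values [W/(m^2 K)] are made-up
    placeholders, chosen only to show the qualitative shape.
    """
    T_leid, T_peak, T_boil = 700.0, 400.0, 100.0
    if T >= T_leid:
        return 250.0      # film boiling: insulating vapor layer, low HTC
    if T >= T_peak:       # nucleate boiling sets in below the Leidenfrost point
        return 250.0 + (10000.0 - 250.0) * (T_leid - T) / (T_leid - T_peak)
    if T >= T_boil:       # nucleate boiling fades toward the boiling point
        return 1500.0 + (10000.0 - 1500.0) * (T - T_boil) / (T_peak - T_boil)
    return 1500.0         # pure convection below the boiling point
```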
To achieve the desired mechanical properties of the components, it is necessary to know the characteristics of the Heat Transfer Coefficient (HTC) describing the heat exchange between the work piece and the surrounding cooling medium. The prediction of the HTC is a typical ill-posed task, which cannot be solved by direct numerical methods. In recent years, this Inverse Heat Conduction Problem (IHCP) has been studied extensively [2,3,4,5,6], presenting various heuristic solutions based on Genetic Algorithms (GA) [7,8] or Particle Swarm Optimization (PSO) [7,9]. There are also promising results in the development of sparse representations for ill-posed problems [10].
In these population-based methods, each instance (chromosome in GA, particle in PSO) represents an HTC function candidate, and its fitness is calculated as the squared difference between real-world measurements and the temperature data generated using the HTC values encoded in the instance. There are promising results; however, these processes usually need thousands of instances and iterations. The number of fitness function evaluations (the most computationally intensive part of the search) is the product of the population size and the iteration count; for this reason, these methods are usually very time-consuming. A whole search takes hours or days and (as is common for heuristic methods) hundreds of executions are necessary to obtain stable results.
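As an illustration, a minimal sketch of such a fitness evaluation is given below; the `simulate_cooling` callable stands in for the expensive cooling simulation and is a hypothetical placeholder, not part of any cited implementation:

```python
import numpy as np

def fitness(htc_candidate, measured_temps, simulate_cooling):
    """Squared-error fitness of one HTC candidate (chromosome/particle).

    htc_candidate    -- the HTC values encoded in the instance
    measured_temps   -- real-world temperature measurements
    simulate_cooling -- hypothetical, expensive function returning the
                        temperature history produced by the candidate
    """
    simulated_temps = simulate_cooling(htc_candidate)
    return np.sum((simulated_temps - measured_temps) ** 2)
```

Since the search calls this function population_size × iteration_count times, and each call hides a full cooling simulation, the total runtime quickly reaches hours or days.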
Artificial Neural Networks (ANN) were motivated by the already existing biological structures of the brain [11], having powerful capabilities for tasks such as learning, pattern matching and adaptation. As in the real biological brain, the basic construction units of ANNs are artificial neurons connected by weighted edges. In the simplest architecture, the input and output neurons are connected through one or more layers of hidden ones. In the case of densely connected ANNs, all neurons of a given hidden or output layer are connected to all neurons of the previous layer. There are several advanced architectures (convolutional neural networks, etc.), and choosing among them requires considerable research and experimentation. In the case of feed-forward neural networks, the information moves in only one direction: from the input neurons to the output nodes through the hidden ones. If the appropriate weight values of the edges are known, the feed-forward operation of the ANN is given by Equation (1):

Y = f(WX + b),  (1)
where X is the vector of the input data; W is the matrix of edge weights; b is the vector of bias constant values; f is an activation function; and Y is the vector containing the prediction of the network.
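For concreteness, a minimal NumPy sketch of this feed-forward step for a single dense layer follows; the layer sizes are illustrative only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sizes: 4 input neurons, 3 output neurons.
X = np.random.rand(4)      # input vector
W = np.random.rand(3, 4)   # matrix of edge weights
b = np.random.rand(3)      # bias vector

Y = sigmoid(W @ X + b)     # Equation (1): Y = f(WX + b)
```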
X and f are known, but the values of the W and b variables have to be determined. There are various techniques to determine these weights, one of the most widely used being the back-propagation algorithm. It is a supervised training method, which means that it learns from valid input and output data pairs, called the training data. The back-propagation algorithm starts with random W and b values, feeds the network with the input data and measures the difference between the prediction of the network (Y) and the known valid output. After that, the error is propagated back to the previous layers recursively and the edge weights are adjusted accordingly. This process is repeated until the loss (the difference between the desired and actual output) is satisfactory. After this learning process, it is possible to save the state of the ANN, and it is able to make predictions for new inputs.
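The following single-sample sketch illustrates one possible form of this procedure, assuming sigmoid activations and a squared-error loss; all sizes and the learning rate are arbitrary placeholders:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X, Y_true = rng.random(8), rng.random(2)      # one hypothetical training pair

# Random initial weights: one hidden layer of 5 neurons.
W1, b1 = rng.normal(size=(5, 8)), np.zeros(5)
W2, b2 = rng.normal(size=(2, 5)), np.zeros(2)
lr = 0.1

for _ in range(1000):
    # Forward pass through hidden and output layers.
    H = sigmoid(W1 @ X + b1)
    Y = sigmoid(W2 @ H + b2)
    # Propagate the error back, layer by layer (deltas for squared-error loss).
    dY = (Y - Y_true) * Y * (1 - Y)
    dH = (W2.T @ dY) * H * (1 - H)
    # Adjust the edge weights according to the propagated error.
    W2 -= lr * np.outer(dY, H); b2 -= lr * dY
    W1 -= lr * np.outer(dH, X); b1 -= lr * dH
```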
ANNs have already been used by researchers in the field, and there are superior results in reducing the material parameters for selected material functions [12]. Our feed-forward network is applicable to the reverse-engineering task of the IHCP: feeding the measured temperature records to the network as input, it may be able to predict the main characteristics of the HTC. Both the temperature and the HTC series are time based; therefore, the input and the output of the network have to be quite large. Nowadays, the availability of modern GPUs makes it possible to design quite large dense networks and train them in a tolerable time.
Modern GPUs can be considered general-purpose architectures with a large number of simple processing cores. Nowadays, these devices are key to solving highly compute-intensive tasks. GPU hardware has two particular strengths: a high number of cores and high memory bandwidth. This programming model forces the programmer to divide the problem into blocks of threads and to run thousands of these. A problem is suitable for GPU acceleration only if it can adapt to these requirements and utilize the benefits. Focusing on the topic of this paper, there are two areas where it is possible to take advantage of these: running complex simulations with the Finite-Element Method (FEM) and training ANNs.
Beyond the computational demands, the other constraint for an efficient training process is the availability of enough training data. To train a network with hundreds or thousands of hidden neurons, millions of training data pairs are required. Although several real-world measurements exist, their number is far from sufficient. The only way to obtain enough data is the generation of corresponding HTC/temperature series pairs. This raises some problems:
It is necessary to construct a model for building potential HTC series.
The temperature history for a given HTC can be generated by a cooling simulation process, which is very time-consuming in the case of millions of inputs (a toy illustration of such a simulation is sketched after this list).
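To show what such a simulation involves, the sketch below cools a 1-D slab with an explicit finite-difference scheme, using a given HTC(T) function (for instance, the illustrative htc_vs_surface_temp above) as the surface boundary condition. All material constants and the geometry are placeholder values; the GPU-accelerated simulation behind the published database is far more elaborate:

```python
import numpy as np

def cooling_history(htc, t_end=60.0, dt=0.001, n=50):
    """Toy explicit 1-D finite-difference cooling simulation.

    htc -- callable: heat transfer coefficient [W/(m^2 K)] as a function
           of surface temperature. All constants below are illustrative.
    """
    k, rho, cp = 40.0, 7800.0, 500.0   # steel-like: W/(m K), kg/m^3, J/(kg K)
    L, T_inf = 0.01, 40.0              # slab half-thickness [m], bath temp [C]
    dx = L / (n - 1)
    alpha = k / (rho * cp)             # thermal diffusivity
    T = np.full(n, 850.0)              # initial temperature field [C]
    history = []
    for step in range(int(t_end / dt)):
        Tn = T.copy()
        # Interior nodes: explicit conduction update.
        Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
        Tn[0] = Tn[1]                  # symmetry at the core
        # Surface node: half-cell energy balance with convective HTC flux.
        q = htc(T[-1]) * (T[-1] - T_inf)
        Tn[-1] = T[-1] + 2 * dt / (rho * cp * dx) * (k * (T[-2] - T[-1]) / dx - q)
        T = Tn
        if step % 1000 == 0:
            history.append(T[-1])      # sample the surface temperature
    return np.array(history)
```

Even this crude sketch needs tens of thousands of time steps for a single one-minute quench; repeating a full FEM variant of it for millions of HTC candidates is what motivates the GPU acceleration.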
The rest of this paper is structured as follows: Section 2 contains the description of the database; Section 3 presents the solutions for the raised problems (HTC generation model, GPU-accelerated simulations); and Section 4 focuses on the conclusions and further development possibilities.
4. Usage Notes
This paper presents a database containing HTC functions and the corresponding temperature records. The authors designed a novel method for generating valid HTC functions based on several real-world measurements. The corresponding temperature function is the result of a simulation process, which was accelerated by graphics cards due to its high computational demands. This database makes it possible to design novel methods to solve the IHCP. In the case of machine-learning-based approaches, the proposed database is directly usable for the training, testing and validation of solutions.
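As a usage illustration, the snippet below splits such generated pairs into training, validation and test sets; the file names and on-disk format are hypothetical and depend on how the database is actually packaged:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical loading step; adjust to the real packaging of the database.
temps = np.load('temperature_series.npy')  # shape: (n_samples, n_time_steps)
htcs = np.load('htc_series.npy')           # shape: (n_samples, n_htc_points)

# 70% training, 15% validation, 15% test.
X_train, X_rest, y_train, y_rest = train_test_split(
    temps, htcs, test_size=0.3, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=42)
```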
The authors have presented several methods to determine the HTC based on temperature data. These are mostly based on time-consuming, heuristic searches (genetic algorithms [19,20], particle swarm optimization [21,22,23], fireworks [24], etc.). As an alternative, based on a similarly generated database, it was possible to develop another approach based on Universal Function Approximator networks. The authors designed a simple feed-forward dense ANN to solve the IHCP [25]. This model contains 120 input neurons and 101 output neurons. The activation function was the sigmoid function, and the loss function was the mean squared error between the prediction and the training output. The network used the Adam optimizer with a learning rate of 0.01. The authors ran several tests with various hidden layer sizes, and the best results were given by a layer of 50 neurons. This ANN was able to estimate the HTC based on the temporal data series (Figure 4).
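A sketch of this architecture in Keras-style code is shown below. The framework choice is our assumption (the text only fixes the layer sizes, the sigmoid activation, the MSE loss and Adam with learning rate 0.01), and applying the sigmoid at the output layer as well is likewise an assumption:

```python
import tensorflow as tf

# 120 temperature samples in, 101 HTC values out, one hidden layer of 50.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(50, activation='sigmoid', input_shape=(120,)),
    tf.keras.layers.Dense(101, activation='sigmoid'),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss='mse')
# model.fit(temperature_series, htc_series, ...)  # trained on database pairs
```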
However, it is evident that the accuracy of this method is not yet satisfactory. Further development is needed to find the appropriate network architecture, the optimal number of nodes, etc. This is a time-consuming process, because training a network with such a large amount of data can take multiple weeks, and this has to be repeated for each potential architecture. This was the reason the authors decided to open the proposed database to all researchers in this field.