Article

A Robust Recurrent Neural Networks-Based Surrogate Model for Thermal History and Melt Pool Characteristics in Directed Energy Deposition

1 Department of Mechanical Engineering, Missouri University of Science and Technology, Rolla, MO 65409, USA
2 Intelligent Systems Center, Missouri University of Science and Technology, Rolla, MO 65409, USA
3 National Strategic Planning and Analysis Research Center (NSPARC), Department of Electrical and Computer Engineering, Mississippi State University, Starkville, MS 39759, USA
* Author to whom correspondence should be addressed.
Materials 2024, 17(17), 4363; https://doi.org/10.3390/ma17174363
Submission received: 30 July 2024 / Revised: 23 August 2024 / Accepted: 27 August 2024 / Published: 3 September 2024

Abstract:
In directed energy deposition (DED), accurately controlling and predicting melt pool characteristics is essential for ensuring desired material qualities and geometric accuracies. This paper introduces a robust surrogate model based on recurrent neural network (RNN) architectures—Long Short-Term Memory (LSTM), Bidirectional LSTM (Bi-LSTM), and Gated Recurrent Unit (GRU). Leveraging a time series dataset from multi-physics simulations and a three-factor, three-level experimental design, the model accurately predicts melt pool peak temperatures, lengths, widths, and depths under varying conditions. RNN algorithms, particularly Bi-LSTM, demonstrate high predictive accuracy, with an R-square of 0.983 for melt pool peak temperatures. For melt pool geometry, the GRU-based model excels, achieving R-square values above 0.88 and reducing computation time by at least 29%, showcasing its accuracy and efficiency. The RNN-based surrogate model built in this research enhances understanding of melt pool dynamics and supports precise DED system setups.

1. Introduction

Directed energy deposition (DED) is an additive manufacturing (AM) technique for metals that creates parts by melting metal feedstocks with concentrated thermal energy [1,2]. Compared to the laser powder bed fusion process, DED is more cost-efficient and capable of producing parts with greater efficiency and adaptability [3]. These remarkable characteristics make DED an attractive option for rapid prototyping, manufacturing functionally graded materials, and repairing high-value components [4]. Specifically, DED excels in repairing worn or damaged components, thereby extending the service life of industrial and aerospace equipment by restoring structural integrity and functionality [5]. Over the last decade, DED’s usage has expanded in the defense, manufacturing, and automotive industries [6]. For instance, DED has been employed to repair airfoils in airplane engines [7]. The DED market size is projected to reach more than USD 700 million by 2025 [8]. Despite DED’s advantages over other AM techniques, challenges remain in minimizing defects during printing. Factors contributing to defect generation include gas entrapment, insufficient melting, and unstable melt pool generation [9,10]. Comprehending the thermal behavior and melt pool generation in relation to process parameters is essential for reducing defects during DED printing [11].
In DED, the melt pool is defined as the region where metal particles are melted during laser–material interaction, generating an orbicular droplet [2,12]. Within the molten pool, the thermal distribution plays a crucial role in determining the microstructure and defects of the manufactured part [13]. In the case of a small molten pool, a relatively reduced thermal distribution can result in inadequate overlap between adjacent melt pools, leading to lack-of-fusion defects [14]. Additionally, an irregular molten pool caused by elevated energy density can cause keyhole formation, leading to substantial material vaporization [15,16]. The dense plasma plume exerts a recoil force on the molten material, which leads to gas entrapment, creating defects [17,18]. Attaining and monitoring an optimal thermal distribution is essential for an appropriate melting flow within the molten pool [19]. DED often encounters non-uniform thermal distribution along with rapid heating and slow cooling cycles, developing anisotropic microstructures characterized by porosity and uneven grains [20]. The uneven grains affect the mechanical properties negatively [21,22]. For the DED process, the thermal distribution within the molten pool can be monitored using sensors such as thermocouples, IR cameras, and pyrometers. An IR camera, in combination with image processing, has been applied to observe the thermal distribution within the melt pool, with comparatively reliable results obtained at a 100 kHz sampling rate and a 20 µm resolution [23]. In addition, IR cameras and pyrometers can monitor radiation from moving bodies and capture thermal distribution without surface contact, thus assisting in situ monitoring of the DED process [24]. On the other hand, thermocouples are flexible and resource-effective compared to other sensing devices; however, they require direct contact, which limits their usage [25].
To predict molten pool thermal distribution, researchers have explored multi-physics and machine learning-based approaches [26,27]. Among multi-physics techniques, both FEM and analytical methods have been developed. On the one hand, an extensive multi-physics FEM model may provide reliable results, but at the expense of high computational cost [28]. On the other hand, a simplified FEM model faces limitations owing to the incomplete multi-physics involved in the simulation analysis [29,30]. Furthermore, the accuracy of the FEM model is also affected by factors such as element type, initial and boundary conditions, and mesh size [29]. Analytical techniques, in turn, solve multi-physics equations subject to initial and boundary conditions to simulate the thermal distribution and melt pool formation in the DED process [31]. However, these methods can become unreliable due to the time-varying mass and volume of the deposit and the uncertainties involved in DED processes.
Machine learning (ML)-based approaches have demonstrated significant advantages in modeling the intricate thermal distributions and melt pool formations essential to Directed Energy Deposition (DED), achieving solid accuracy and efficiency [32]. These approaches significantly reduce the high costs associated with extensive experimental procedures in research and development and alleviate the burden of the lengthy computational times typically required by traditional simulation methods [33]. ML-based models are fundamentally data-driven, analyzing the relationship between each process parameter, such as laser power, scanning speed, and powder feed rate, and outputs such as thermal distribution and mechanical properties [34]. The data for training these models are usually collected from experiments or simulations, and the predictive insights provided by ML models greatly enhance the scalability of applications across various scenarios [35]. Various ML algorithms, such as SVM, clustering, and artificial neural networks, have been utilized to predict melt pool characteristics [36,37]. In addition, defects in printed parts can be detected by predicting the melt pool dimensions [38]. Despite these advancements, melt pool dynamics pose complex challenges. Primarily, the acquisition of the large, robust datasets necessary for training these models is prohibitively expensive and time-intensive [39]. Additionally, current research inadequately addresses the sequential nature of melt pool dynamics, highlighting a critical need for more sophisticated applications of recurrent neural network (RNN) algorithms. Furthermore, the computational demand and memory requirements of these models also need optimization to enhance their reliability and robustness.
To address these challenges, this research introduces a pioneering RNN-based surrogate model designed specifically to predict both the thermal history and the geometric characteristics of melt pools in DED. A comprehensive framework incorporating a factorial design of experiments, multi-physics modeling, refined data processing, and rigorous surrogate model training, evaluation, and comparison is proposed. This approach deepens the understanding of complex melt pool dynamics and significantly advances the operational capabilities of DED systems. It marks a substantial progression in the field, enhancing the precision and efficiency of ML-based surrogate models and facilitating their practical application in optimizing DED processes.

2. Methodology

The method used to develop the robust machine learning-based surrogate model for predicting melt pool thermal history and characteristics is presented in Figure 1. The process begins with the design of experiments, focusing on various parameters such as geometry, material, laser power, scanning speed, and hatch spacing. This is followed by multi-physics modeling, which includes finite element (FE) simulations and thermal modeling with temperature-dependent material properties. Key data points such as melt pool peak temperature and dimensions are extracted for building the surrogate model. In this research, the surrogate model is machine learning-based, employing multiple machine learning algorithms including Extreme Gradient Boosting (XGBoost), Long Short-Term Memory (LSTM), Bidirectional Long Short-Term Memory (Bi-LSTM), and Gated Recurrent Unit (GRU) to ensure accurate predictions of melt pool thermal history and dimensions. The evaluation and comparison of each algorithm are based on R-square values, Root Mean Square Error (RMSE), and Mean Absolute Error (MAE), ensuring robust model performance. A detailed description of each section is provided in the following content.

2.1. Design of Experiments

In this research, a factorial design of experiments (DOE) is employed, involving three factors, each at three different levels. This methodical approach is designed to thoroughly investigate the interactions and effects of the variables on the outcomes. The chosen factors, critical to the Directed Energy Deposition (DED) process, include laser power (W), scanning speed (mm/s), and hatching space (%). Specifically, the laser power varies between 600 and 1000 watts, the scanning speed ranges from 2 to 6 mm per second, and the hatching space is adjusted from 40% to 60%. These parameters are selected based on their significant influence on the melt pool thermal distribution. A total of 27 experimental runs are conducted to explore the full factorial space, providing a comprehensive understanding of the process dynamics. The schematic detailing these experiments and their configurations is depicted in Figure 2.
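As a concrete illustration, the 27-run full factorial space can be enumerated with a few lines of Python. The mid-level values (800 W, 4 mm/s, 50%) are assumed range midpoints, since the text states only the factor ranges:

```python
from itertools import product

# Three factors at three levels each. The mid levels are assumed to be
# the range midpoints; the paper states only the ranges 600-1000 W,
# 2-6 mm/s, and 40-60% hatching space.
laser_power = [600, 800, 1000]   # W
scan_speed = [2, 4, 6]           # mm/s
hatch_space = [40, 50, 60]       # %

# Full factorial design: every combination of the three levels.
runs = [
    {"run": i + 1, "power": p, "speed": v, "hatch": h}
    for i, (p, v, h) in enumerate(product(laser_power, scan_speed, hatch_space))
]

print(len(runs))  # 3^3 = 27 experimental runs
```

Each dictionary corresponds to one row of the design table (Table 2) and to one multi-physics simulation run.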
In this research, Ti-6Al-4V is utilized as both the substrate material and the powder. Figure 3 depicts the simulation setup and the laser tool path, featuring a substrate thickness of 6.35 mm. This design incorporates four vertical single laser tracks that run from top to bottom. The total width of the deposit varies from 4.4 mm to 5.6 mm depending on the hatching space, with a length of 15 mm and a thickness of 0.5 mm. The red-colored line indicates that the laser is active, while the purple dashed line signifies that the gantry is moving to the next track and the laser is turned off. In this setup, cantilever clamping, shown in green, extends from the left end to 20 mm. Table 1 details the process parameters for the factorial design of experiments, while Table 2 presents the complete design used for the subsequent multi-physics simulation analysis.

2.2. Multi-Physics Simulation

After designing the experiments, each of the 27 runs was simulated in Abaqus CAE using the AM Modeler plug-in. For the thermal simulation, temperature-dependent material properties of Ti6Al4V were used, as shown in Figure 4.
To calculate the thermal distribution during laser heating and material deposition, the 3D heat conduction equation was solved for the temperature field T(x, y, z, t) over the domain, incorporating appropriate initial and boundary conditions, as shown in Equation (1) [41,42].
$$\rho C \frac{\partial T}{\partial t} = \frac{\partial}{\partial x}\left(k \frac{\partial T}{\partial x}\right) + \frac{\partial}{\partial y}\left(k \frac{\partial T}{\partial y}\right) + \frac{\partial}{\partial z}\left(k \frac{\partial T}{\partial z}\right) + Q$$
where ρ is density, C is specific heat, T is temperature, t is time, k is thermal conductivity, and Q is the heat flux in the form of the laser heat source. To calculate heat loss due to convection, Newton's law of cooling was employed, as shown in Equation (2) [41,42].
$$q_{conv} = h \left( T - T_{env} \right)$$
where h is the convective heat transfer coefficient, taken as 30.0 W/(m²·K), T is the temperature at any given time on the surface of the substrate, and T_env is the room temperature of 25.0 °C. Heat loss due to radiation is calculated using the Stefan–Boltzmann radiation law, as shown in Equation (3) [41,42].
$$q_{rad} = \epsilon \sigma \left( T^4 - T_{env}^4 \right)$$
where ϵ is the emissivity, taken as 0.8, and σ is the Stefan–Boltzmann constant, with a value of 5.67 × 10⁻⁸ W/(m²·K⁴). For the body heat flux, Goldak's double-ellipsoid heat distribution is used, as shown in Equation (4) [42,43].
$$Q = \frac{6\sqrt{3}\, P \eta}{a b c\, \pi \sqrt{\pi}} \exp\left( -\frac{3x^2}{a^2} - \frac{3y^2}{b^2} - \frac{3(z + V_s t)^2}{c^2} \right)$$
where P is the laser power in watts and η is the laser absorption efficiency, taken as 0.6. The semi-axes a and b are taken as 1 and 2 mm, and the front and rear values of c are taken as 1 and 2 mm, respectively. V_s is the scan speed with which the laser moves in the z-direction.
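The double-ellipsoid distribution in Equation (4) can be sketched as a small Python function. The front/rear switch on c and the unit choices are illustrative assumptions, not the authors' exact implementation:

```python
import math

def goldak_flux(x, y, z, t, P=1000.0, eta=0.6, a=1e-3, b=2e-3,
                c_f=1e-3, c_r=2e-3, v_s=6e-3):
    """Goldak double-ellipsoid volumetric heat flux, Equation (4).

    The laser travels along z at speed v_s, so zeta = z + v_s * t is the
    distance from the moving heat-source centre. Parameter values follow
    the paper (eta = 0.6, a = 1 mm, b = 2 mm, c front/rear = 1 and 2 mm);
    SI units (metres, seconds) are an assumption for illustration.
    """
    zeta = z + v_s * t
    # Front half of the ellipsoid leads the source; the rear half trails it.
    c = c_f if zeta >= 0.0 else c_r
    coeff = 6.0 * math.sqrt(3.0) * P * eta / (a * b * c * math.pi * math.sqrt(math.pi))
    return coeff * math.exp(-3.0 * x**2 / a**2
                            - 3.0 * y**2 / b**2
                            - 3.0 * zeta**2 / c**2)

# The flux is maximal at the source centre and decays with distance.
print(goldak_flux(0, 0, 0, 0) > goldak_flux(0.5e-3, 0, 0, 0))  # True
```

In the FE model, this flux would be evaluated at each integration point and applied as the body heat flux Q in Equation (1).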

2.3. Data Generation and Extraction

After solving all the given designs of experiments, data were extracted from each of the ODB files using a Python script. Two types of data were extracted: maximum temperature and melt pool dimensions. Therefore, separate scripts were used for each type. For example, run number 27 is shown in Figure 5a during material deposition and analysis.
For the maximum temperature, the highest temperature value was extracted from each frame at every increment of the Run27 simulation, as shown in Figure 5b. The same concept was applied to extract and calculate the melt pool dimensions. For each successfully solved increment, all nodal locations with temperatures equal to or above 1605 °C were extracted. For each direction, the highest nodal coordinate was then subtracted from the lowest, providing the corresponding dimension. This method was used to extract the melt pool length, width, and depth, as shown in Figure 5c.
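The dimension-extraction step described above can be sketched as follows. The data layout and function names are hypothetical, not the authors' actual Abaqus ODB script:

```python
# Hypothetical sketch of the dimension-extraction step: given nodal
# coordinates and temperatures for one solved increment, keep nodes at or
# above the melting point (1605 C) and take max - min along each axis.
T_MELT = 1605.0  # melting point of Ti-6Al-4V, degrees C

def melt_pool_dimensions(nodes):
    """nodes: list of (x, y, z, temperature) tuples for one increment.

    Returns (length, width, depth) in the coordinate units, or None if
    no node has reached the melting point.
    """
    molten = [(x, y, z) for x, y, z, temp in nodes if temp >= T_MELT]
    if not molten:
        return None
    xs, ys, zs = zip(*molten)
    # Dimension along each axis = highest location minus lowest location.
    return (max(zs) - min(zs),   # length (scan direction, z here)
            max(xs) - min(xs),   # width
            max(ys) - min(ys))   # depth

# Synthetic increment: two molten nodes 1.5 mm apart along the scan axis.
nodes = [(0.0, 0.0, 0.0, 1700.0), (0.4, -0.2, 1.5, 1650.0), (1.0, 0.0, 3.0, 900.0)]
print(melt_pool_dimensions(nodes))  # (1.5, 0.4, 0.2)
```

Repeating this over every increment of every run yields the time series of melt pool dimensions used as training labels.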

2.4. Machine Learning Models

After extracting data from finite element simulations, four machine learning algorithms—Extreme Gradient Boosting (XGBoost), Long Short-Term Memory (LSTM), Bidirectional Long Short-Term Memory (Bi-LSTM), and Gated Recurrent Units (GRUs)—are prepared to build a surrogate model for predicting the thermal history and dimensions of the melt pool. Both the accuracy and computational time of these algorithms are considered to construct a robust machine learning-based surrogate model. The subsequent sections describe each algorithm’s advantages and mathematical concepts.

2.4.1. Extreme Gradient Boosting (XGBoost)

XGBoost is recognized as one of the most effective implementations of gradient-boosted decision trees [44]. Designed to make efficient use of memory and hardware computational power, XGBoost significantly reduces training time while enhancing performance compared to other ML algorithms. The core concept of boosting involves sequentially constructing sub-trees, where each successive tree aims to reduce the errors of the preceding one. This iterative method fits each new tree to the prior residuals, thereby minimizing the error of the cost function. Let us assume a dataset defined as [44]
$$D = \{ (x_i, y_i) \mid x_i \in \mathbb{R}^m,\; y_i \in \mathbb{R} \}.$$
Here, m is the feature dimension, and x_i and y_i are the features and response of sample i, respectively. In addition, n represents the number of samples (|D| = n). The predicted output ŷ_i for an entry i is as follows [44]:
$$\hat{y}_i = \sum_{k=1}^{K} f_k(x_i), \quad f_k \in \mathcal{F}.$$
In the above equation, f_k represents a standalone tree within the function space 𝓕, and f_k(x_i) indicates the prediction of the kth tree for the ith sample. The objective function 𝓛 is written as [44]
$$\mathcal{L} = \sum_{i=1}^{n} l(y_i, \hat{y}_i) + \sum_{k=1}^{K} \Omega(f_k).$$
By minimizing the objective function 𝓛, the regression tree functions f_k are obtained. The loss function l(y_i, ŷ_i) measures the difference between the predicted output ŷ_i and the actual output y_i. The term Ω penalizes model complexity to prevent overfitting and is defined as [44]
$$\Omega(f_k) = \gamma T + \frac{1}{2} \lambda \lVert w \rVert^2.$$
Here, γ and λ are regularization factors, and T and w are the number of leaves and the leaf weights, respectively. A second-order Taylor series expansion can be applied to approximate the objective function. Let I_j = {i | q(x_i) = j} be the instance set of leaf j, where q(x) maps a sample to its leaf. The optimal weight w_j* of leaf j and the corresponding objective value are estimated as [44]
$$w_j^* = -\frac{\sum_{i \in I_j} g_i}{\sum_{i \in I_j} h_i + \lambda},$$
$$\mathcal{L}^* = -\frac{1}{2} \sum_{j=1}^{T} \frac{\left(\sum_{i \in I_j} g_i\right)^2}{\sum_{i \in I_j} h_i + \lambda} + \gamma T.$$
Here, g_i and h_i are the first- and second-order gradients of the loss function. 𝓛* can be used as a quality score for a tree structure q: the lower the score, the better the structure. Since it is infeasible to enumerate all possible tree structures, a greedy algorithm starts from a single leaf and iteratively adds branches. Let I_L and I_R denote the instance sets of the left and right child nodes after a candidate split, with I = I_L ∪ I_R. The loss reduction following the split can be written as [44]
$$\mathcal{L}_{split} = \frac{1}{2} \left[ \frac{\left(\sum_{i \in I_L} g_i\right)^2}{\sum_{i \in I_L} h_i + \lambda} + \frac{\left(\sum_{i \in I_R} g_i\right)^2}{\sum_{i \in I_R} h_i + \lambda} - \frac{\left(\sum_{i \in I} g_i\right)^2}{\sum_{i \in I} h_i + \lambda} \right] - \gamma.$$
The XGBoost model employs numerous simple trees and assigns scores to leaf nodes during the splitting process.
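The leaf-weight and split-gain formulas above can be checked with a short pure-Python sketch, independent of the XGBoost library itself:

```python
def leaf_weight(g_sum, h_sum, lam):
    """Optimal leaf weight: w* = -G / (H + lambda)."""
    return -g_sum / (h_sum + lam)

def split_gain(g_left, h_left, g_right, h_right, lam, gamma):
    """Loss reduction from splitting one leaf into left/right children.

    gain = 1/2 * [G_L^2/(H_L+lam) + G_R^2/(H_R+lam)
                  - (G_L+G_R)^2/(H_L+H_R+lam)] - gamma
    A positive gain means the split improves the regularized objective.
    """
    def score(g, h):
        return g * g / (h + lam)
    return 0.5 * (score(g_left, h_left) + score(g_right, h_right)
                  - score(g_left + g_right, h_left + h_right)) - gamma

# Toy example: gradients of opposite sign on each side favour splitting.
gain = split_gain(g_left=-4.0, h_left=2.0, g_right=4.0, h_right=2.0,
                  lam=1.0, gamma=0.1)
print(gain > 0)  # True: separating the two groups reduces the loss
```

The same computation, applied to every candidate split at every node, is what drives XGBoost's greedy tree construction.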

2.4.2. Long Short-Term Memory (LSTM)

LSTM networks, an advanced type of recurrent neural network, effectively address long-range dependencies in sequence data, which is crucial in scenarios like directed energy deposition processes. Characterized by three distinct gates (input, forget, and output), LSTMs manage information flow, selectively retaining or discarding data to precisely learn dependencies. The input gate (i_t) decides which new information from the candidate state (c̃_t) to incorporate into the cell state (c_t), enabling the model to update its memory with relevant data. The forget gate (f_t) selectively removes irrelevant information from the cell state to keep the model focused on pertinent data through time. The output gate (o_t) controls the flow of information from the cell state to the next layer or time step, determining what part of the hidden state (h_t) is used to compute the output and passed to the next iteration.
This architecture mitigates gradient vanishing and exploding issues, enhancing robustness and accuracy in predictive models and making LSTM ideal for capturing complex thermal and mechanical interactions in additive manufacturing. The LSTM architecture is shown in Figure 6. The operator ‘×’ denotes pointwise multiplication, and ’+’ denotes pointwise addition. The mathematical framework of LSTMs is presented in [45].
Forget gate:
$$f_t = \sigma(W_{fh} h_{t-1} + W_{fx} x_t + P_f \cdot c_{t-1} + b_f)$$
Input gate:
$$i_t = \sigma(W_{ih} h_{t-1} + W_{ix} x_t + P_i \cdot c_{t-1} + b_i)$$
$$\tilde{c}_t = \tanh(W_{ch} h_{t-1} + W_{cx} x_t + b_{\tilde{c}})$$
$$c_t = f_t \cdot c_{t-1} + i_t \cdot \tilde{c}_t$$
Output gate:
$$o_t = \sigma(W_{oh} h_{t-1} + W_{ox} x_t + P_o \cdot c_t + b_o)$$
$$h_t = o_t \cdot \tanh(c_t)$$
Here, W_f, W_i, W_c, and W_o are the weight matrices of each gate. The x_t, h_t, and y_t denote the input, hidden state (recurrent information), and output at time t. Furthermore, f_t is the forget gate activation (initialized at 0), and P_f, P_i, and P_o are the peephole weights for the forget, input, and output gates. The c_t denotes the LSTM cell state, and b_f, b_i, b_c̃, and b_o are the biases. Figure 7 shows the architecture of the series of LSTM structures.
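A minimal sketch of one peephole LSTM step may help make the gate equations concrete. Scalar weights stand in for the weight matrices of a real network, and all weight values are illustrative assumptions:

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def lstm_step(x_t, h_prev, c_prev, w):
    """One step of a one-dimensional peephole LSTM cell.

    `w` is a dict of scalar weights/biases; real implementations use
    matrices, but scalars keep the gate arithmetic visible.
    """
    f_t = sigmoid(w["W_fh"] * h_prev + w["W_fx"] * x_t + w["P_f"] * c_prev + w["b_f"])
    i_t = sigmoid(w["W_ih"] * h_prev + w["W_ix"] * x_t + w["P_i"] * c_prev + w["b_i"])
    c_tilde = math.tanh(w["W_ch"] * h_prev + w["W_cx"] * x_t + w["b_c"])
    c_t = f_t * c_prev + i_t * c_tilde            # forget old state, add new
    o_t = sigmoid(w["W_oh"] * h_prev + w["W_ox"] * x_t + w["P_o"] * c_t + w["b_o"])
    h_t = o_t * math.tanh(c_t)                    # gated output
    return h_t, c_t

# Run a short input sequence through the cell with uniform toy weights.
w = {k: 0.5 for k in ("W_fh", "W_fx", "P_f", "b_f", "W_ih", "W_ix", "P_i",
                      "b_i", "W_ch", "W_cx", "b_c", "W_oh", "W_ox", "P_o", "b_o")}
h, c = 0.0, 0.0
for x in [1.0, 0.5, -0.5]:
    h, c = lstm_step(x, h, c, w)
print(-1.0 < h < 1.0)  # True: h_t = o_t * tanh(c_t) always lies in (-1, 1)
```

Note how the cell state c_t accumulates additively across steps, which is what lets gradients flow over long sequences.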

2.4.3. Bidirectional Long Short-Term Memory (Bi-LSTM)

Bi-LSTM networks enhance traditional LSTM by processing data both forwards and backwards, enriching sequence context understanding. This dual-path approach not only boosts predictive accuracy in tasks like outcome prediction in directed energy deposition but also captures nuanced temporal dynamics from both past and future contexts. Despite their increased computational demands and potential for overfitting with small datasets, Bi-LSTMs remain valuable for thoroughly analyzing thermal and mechanical properties in AM. Leveraging LSTM strengths, they effectively manage long-term dependencies and mitigate gradient issues, providing a robust model for complex material behaviors. The Bi-LSTM architecture is defined in Figure 8, and the LSTM block within this architecture follows the structure shown in Figure 6. The mathematical expression is given in [45].
$$f_t^L = \sigma(W_{fh}^L h_{t-1}^L + W_{fx}^L h_t^{L-1} + b_f^L),$$
$$i_t^L = \sigma(W_{ih}^L h_{t-1}^L + W_{ix}^L h_t^{L-1} + b_i^L),$$
$$\tilde{c}_t^L = \tanh(W_{\tilde{c}h}^L h_{t-1}^L + W_{\tilde{c}x}^L h_t^{L-1} + b_{\tilde{c}}^L),$$
$$c_t^L = f_t^L \cdot c_{t-1}^L + i_t^L \cdot \tilde{c}_t^L,$$
$$o_t^L = \sigma(W_{oh}^L h_{t-1}^L + W_{ox}^L h_t^{L-1} + b_o^L),$$
$$h_t^L = o_t^L \cdot \tanh(c_t^L).$$
$$y_t = W_{\overrightarrow{h}y} \overrightarrow{h}_t + W_{\overleftarrow{h}y} \overleftarrow{h}_t + b_y$$
Here, h_t^L represents the hidden state of the Lth layer at time t, and h_t^(L−1) is the output of the layer below. Equation (24) gives the output of the architecture, where the two W terms denote the weights of the forward and backward passes, respectively, and b_y signifies the bias of the output.

2.4.4. Gated Recurrent Units (GRUs)

GRUs offer a streamlined alternative to LSTMs and Bi-LSTMs, ideal for modeling thermal histories in directed energy deposition. By employing just two gates—the reset and update gates—GRUs enhance computational efficiency and reduce model complexity, making them well-suited for scenarios with limited data or computational resources. The reset gate determines how much past information to forget. In contrast, the update gate decides how much of the current input should be incorporated, allowing the model to handle time dependencies dynamically. Although GRUs may struggle with extremely long dependencies, their ability to efficiently process sequential data without significant computational overhead keeps them highly relevant for improving predictive models in AM. Based on Figure 9, the following mathematical model has been proposed for GRU [45]:
Reset gate:
$$r_t = \sigma(W_{rh} h_{t-1} + W_{rx} x_t + b_r),$$
Update gate:
$$z_t = \sigma(W_{zh} h_{t-1} + W_{zx} x_t + b_z),$$
$$\tilde{h}_t = \tanh(W_{\tilde{h}h} (r_t \cdot h_{t-1}) + W_{\tilde{h}x} x_t + b_{\tilde{h}}),$$
$$h_t = (1 - z_t) \cdot h_{t-1} + z_t \cdot \tilde{h}_t.$$
Here, W_r, W_z, and W_h̃ are the weights, and b_r, b_z, and b_h̃ are the biases.
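As with the LSTM, a one-dimensional sketch makes the GRU gating concrete; the scalar weights are illustrative assumptions rather than trained values:

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def gru_step(x_t, h_prev, w):
    """One step of a one-dimensional GRU cell.

    Scalar weights in `w` stand in for the weight matrices of a real model.
    """
    r_t = sigmoid(w["W_rh"] * h_prev + w["W_rx"] * x_t + w["b_r"])   # reset gate
    z_t = sigmoid(w["W_zh"] * h_prev + w["W_zx"] * x_t + w["b_z"])   # update gate
    h_tilde = math.tanh(w["W_hh"] * (r_t * h_prev) + w["W_hx"] * x_t + w["b_h"])
    # Convex combination: z_t interpolates between old state and candidate.
    return (1.0 - z_t) * h_prev + z_t * h_tilde

w = {k: 0.5 for k in ("W_rh", "W_rx", "b_r", "W_zh", "W_zx", "b_z",
                      "W_hh", "W_hx", "b_h")}
h = 0.0
for x in [1.0, 0.5, -0.5]:
    h = gru_step(x, h, w)
print(-1.0 < h < 1.0)  # True: the state stays in (-1, 1)
```

With only two gates and a single state vector, each GRU step performs fewer multiplications than an LSTM step, which is the source of the computational savings reported later for the GRU-based models.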

2.4.5. Model Evaluation

In the model evaluation part, the performance of the surrogate model is assessed using several statistical metrics to ensure accuracy and reliability. These metrics include R-squared (R²), which measures the proportion of variance in the dependent variable that is predictable from the independent variables; Root Mean Square Error (RMSE), which provides the standard deviation of the prediction errors or residuals; and Mean Absolute Error (MAE), which represents the average magnitude of the errors in a set of predictions, without considering their direction. The mathematical formulas are presented as follows:
$$R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}$$
$$RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}$$
$$MAE = \frac{1}{n} \sum_{i=1}^{n} \lvert y_i - \hat{y}_i \rvert$$
Here, y i are the observed values, y ^ i are the predicted values, and  y ¯ is the mean of the observed values. Additionally, the computation time of the training model is also considered a factor in evaluating model performance.
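The three metrics are straightforward to compute directly from the definitions; a minimal sketch:

```python
import math

def r2_rmse_mae(y_true, y_pred):
    """Compute R^2, RMSE, and MAE for paired observed/predicted values."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    ss_res = sum((y - yp) ** 2 for y, yp in zip(y_true, y_pred))
    ss_tot = sum((y - mean_y) ** 2 for y in y_true)
    r2 = 1.0 - ss_res / ss_tot
    rmse = math.sqrt(ss_res / n)
    mae = sum(abs(y - yp) for y, yp in zip(y_true, y_pred)) / n
    return r2, rmse, mae

# Toy check: predictions off by a constant 1.0 degree everywhere.
y_true = [1600.0, 1700.0, 1800.0, 1900.0]
y_pred = [1601.0, 1701.0, 1801.0, 1901.0]
r2, rmse, mae = r2_rmse_mae(y_true, y_pred)
print(round(rmse, 6), round(mae, 6))  # 1.0 1.0
```

A constant offset of 1 °C yields RMSE = MAE = 1 and an R² very close to one, since the residual sum of squares is tiny relative to the total variance.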

3. Results and Discussion

3.1. Data Pre-Processing and Model Training

The data used to build the surrogate models for melt pool peak temperature and melt pool dimensions in this research originated from 27 runs of multi-physics modeling, employing a three-level, three-factor factorial design of experiments. A total of 54,956 data points were extracted. For the melt pool peak temperature model, data points that did not reach the melting point of Ti-6Al-4V (1605 °C) or that exceeded 3200 °C were excluded. The vaporization point of Ti-6Al-4V is 3040 °C, but melt pool peak temperatures occasionally exceed this threshold; to accommodate most conditions during deposition, temperatures above the vaporization point were also retained. After cleaning, the dataset contained 38,867 peak temperature points, with 28,683 allocated for training and 10,184 for testing, accounting for 73.8% and 26.2%, respectively. Figure 10 displays the training features: time, x position, y position, z position, laser power, scanning speed, and hatch space. Figure 11 depicts the training label for melt pool peak temperature, and a detailed view is shown in Figure 12. The peak temperature rises sharply when the laser is on and drops when it is off; each run consists of four tracks, and fluctuations occur during the movement along each track.
Regarding the melt pool dimension model, 27,772 data points were collected because data on melt pool dimensions are extracted only when the border of the melt pool exceeds 1605 °C, as described in Section 2. These points are divided into 20,182 (72.6%) for training and 7590 (27.4%) for testing. The features and labels are shown in Figure 13. In the melt pool dimension model, time (s), laser power (W), scanning speed (mm/s), hatching space (%), and peak temperature (°C) are considered as features, while melt pool length, width, and depth (mm) are considered as labels. After removing outliers, such as extremely high and low thermal histories, 19 runs of data remain: 14 runs are designated for training and 5 runs for testing. To mitigate the impact of disproportionately large values among the process parameters and training features, normalization is applied in the data pre-processing stage. The details of the data for the two surrogate models, the melt pool peak temperature model, and the melt pool dimension model are described in Table 3.
With regard to model training, the grid search method is applied to find the proper hyperparameters. For the XGBoost algorithm, the tree depth is set to five to avoid overfitting, with a learning rate of 0.01 to ensure steady convergence. The objective is defined as ‘reg:squarederror’ to minimize squared errors in regression tasks. L1 regularization (reg_alpha) is applied at 0.01 to promote parameter sparsity, and L2 regularization (reg_lambda) is set at 1 to reduce weight extremes. Both subsample and colsample_bytree are maintained at 0.8, allowing the model to learn from 80% of data and features, respectively, to prevent overfitting. The evaluation metric used is ‘rmse’, measuring prediction accuracy. Training involves 10,000 rounds, optimizing learning against computational demands. In terms of RNN algorithms, the hyperparameters for all LSTM, Bi-LSTM, and GRU algorithms are unified to ensure a fair comparison among the models. The sequence length of data is set to 10, batch size to 64, dropout rate to 0.25, hidden dimension to 100, number of layers to 2, learning rate to 0.001, and number of epochs to 100.
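The data preparation described above (min-max normalization and a sequence length of 10) can be sketched as follows; the exact windowing scheme used by the authors is an assumption:

```python
def min_max_normalize(column):
    """Min-max scale one feature column to [0, 1]."""
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) for v in column]

def make_sequences(samples, seq_len=10):
    """Slice a time-ordered list of feature vectors into overlapping
    windows of length seq_len, the input shape the RNN models expect.

    A series of n samples yields n - seq_len + 1 windows.
    """
    return [samples[i:i + seq_len] for i in range(len(samples) - seq_len + 1)]

# Toy series of 12 already-normalized scalar samples -> 3 windows of 10.
series = [i / 11 for i in range(12)]
windows = make_sequences(series, seq_len=10)
print(len(windows), len(windows[0]))  # 3 10
```

Normalization prevents large-magnitude features such as laser power (hundreds of watts) from dominating small ones such as scanning speed, while windowing preserves the sequential structure that the LSTM, Bi-LSTM, and GRU models exploit.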

3.2. Melt Pool Peak Temperature Model

In this section, four different algorithms—XGBoost, Bi-LSTM, LSTM, and GRU—are applied to this research. To compare the pros and cons of tree-based versus RNN algorithms, the predicted results by XGBoost and Bi-LSTM are presented together in one figure. In terms of comparing the complexity of RNN algorithms, the results of LSTM and GRU are displayed together in another figure. Additionally, two specific runs, Run1 (laser power: 600 W, scanning speed: 2 mm/s, hatching space: 60%) and Run27 (laser power: 1000 W, scanning speed: 6 mm/s, hatching space: 40%), are extracted and analyzed to facilitate a detailed comparison and enhance clarity. A comprehensive comparison of predictions by the four algorithms is also included in this section.
Figure 14 depicts the comparison of Run1 among actual values and predicted values by Bi-LSTM and XGBoost. It shows that the melt pool peak temperatures predicted by Bi-LSTM closely match the actual peak temperatures. The results from XGBoost also demonstrate reasonably good prediction performance. However, in Run27, the predictions by XGBoost significantly deviate from the actual peak temperatures, especially in the second and third tracks, where the predictions have more fluctuation and are higher than the actual values. In contrast, the results from Bi-LSTM closely align with the actual values, demonstrating the robustness of the model built using the Bi-LSTM algorithm, as shown in Figure 15. In terms of the other two algorithms, LSTM and GRU, both achieve good predictions that closely fit the actual values. In the first track of Run1, both predictions are slightly lower than the actual values, yet the remaining predictions demonstrate good performance, as depicted in Figure 16. In Run27, shown in Figure 17, except for the fourth track, where the predictions are slightly lower than the actual values, most of the results closely match the actual values.
All predicted temperatures versus actual temperatures scatter plots are shown in Figure 18. It demonstrates that the predicted values by XGBoost are relatively less accurate than those produced by RNN algorithms. Most of the predicted results by RNN algorithms closely match the red line, which has a slope of one, indicating that the predictions are both accurate and robust. To compare the performance of the four algorithms, Table 4 reveals that the Bi-LSTM model has the highest accuracy, longest computational time, and greatest memory usage. Although XGBoost performs well in terms of computational time and memory usage, its accuracy is not robust enough to predict melt pool peak temperatures reliably. The accuracy of the LSTM and GRU models is similar; however, the computational time and memory usage of the GRU model are lower than those of the LSTM model by 20.7% and 5.4%, respectively. In conclusion, the Bi-LSTM model provides the most accurate results, while the GRU model offers comparable accuracy with lower computational time and memory usage.

3.3. Melt Pool Geometry Model

In this section, three surrogate models are presented: melt pool length, width, and depth, respectively. To clarify the comparison, results from Run23 and Run14 are extracted for discussion. Additionally, the overall results of the four algorithms are compared using scatter plots and a comprehensive table in the subsequent contents.

3.3.1. Melt Pool Length Model

In Run14 and Run23, the Bi-LSTM model consistently outperforms the XGBoost model in predicting melt pool length. As depicted in Figure 19, the Bi-LSTM model demonstrates superior accuracy in predicting higher melt pool lengths, particularly for data points from 5700 to 5800. Moreover, towards the end of Run14, from data points 6300 to 6700, the Bi-LSTM model shows significantly less fluctuation compared to the XGBoost model, indicating its enhanced stability under varying conditions. In Figure 20, although the XGBoost model accurately predicts the melt pool lengths for data points from 3400 to 3750, the overall performance of the Bi-LSTM model remains more consistent and aligned with the actual length.
Regarding the LSTM and GRU models, the GRU model exhibits less error in predicting longer melt pool lengths, as evident in Figure 21 for data points from 5700 to 5800. Throughout the remainder of Run14, both models achieve commendable accuracy in fitting the actual length. In Run23, despite both models displaying a similar trend in capturing the actual values, the LSTM model exhibits greater deviations from the actual lengths compared to the GRU model, as illustrated in Figure 22. This result suggests that while the LSTM model is generally reliable, the GRU model may offer better consistency and precision under certain conditions.
The overall predictions of the four algorithms are compared in a scatter plot. Figure 23 demonstrates that the melt pool lengths predicted by the RNN algorithms are more accurate than those predicted by the XGBoost algorithm. Notably, when the melt pool length exceeds 2 mm, predictions from the XGBoost model deviate significantly from the ideal fit, reducing accuracy. Table 5 summarizes the evaluation and comparative analysis of the melt pool length models. Although the XGBoost algorithm is attractive for its low computation time and memory usage, its accuracy needs improvement. Among the RNN algorithms, the GRU and Bi-LSTM models perform best. In particular, the GRU model not only achieves the highest R-square value but also requires the least computation time and memory usage of all RNN algorithms. Compared to the Bi-LSTM model, the GRU model's computation time and memory usage are lower by 44% and 51%, respectively, making it the most suitable candidate for predicting melt pool length in this research.
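The degradation beyond 2 mm can be quantified by measuring each point's distance from the slope-one ideal fit and splitting the parity data at that threshold. A sketch with NumPy, using made-up lengths rather than the paper's data:

```python
import numpy as np

def mean_abs_error_by_threshold(actual, predicted, threshold=2.0):
    """Mean absolute deviation from the ideal fit (y = x),
    split into samples at or below / above an actual-value threshold."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    abs_dev = np.abs(predicted - actual)   # distance from the slope-one line
    below = abs_dev[actual <= threshold].mean()
    above = abs_dev[actual > threshold].mean()
    return below, above

# Illustrative melt pool lengths (mm), not the paper's data
actual = [1.5, 1.8, 2.2, 2.6]
pred   = [1.5, 1.9, 2.5, 3.1]
low_err, high_err = mean_abs_error_by_threshold(actual, pred)
```

A larger error in the above-threshold group, as with XGBoost here, confirms that accuracy deteriorates for the longer melt pools.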

3.3.2. Melt Pool Width Model

In the melt pool width model, Figure 24 and Figure 25 illustrate the predictions made by the Bi-LSTM and XGBoost algorithms for Run14 and Run23, respectively. These plots reveal a notable difference in accuracy between the algorithms. For Run14, particularly from data points 5500 to 5900, and in Run23 from data points 3100 to 3400, the predictions by the XGBoost model significantly exceed the actual width, highlighting its lower accuracy compared to the Bi-LSTM model. The Bi-LSTM model aligns more consistently with the actual measurements, particularly in complex segments where the melt pool width fluctuates.
Figure 26 and Figure 27 showcase the performance of the LSTM and GRU models for Run14 and Run23, respectively. Both models exhibit similar trends and achieve commendable accuracy in fitting the actual widths in Run14, with the GRU model slightly outperforming the LSTM. Notably, in Run23, while neither model perfectly replicates the fluctuation observed in the actual width measurements, they successfully capture the broader trends. The GRU model consistently demonstrates a slight edge over the LSTM in terms of alignment with the actual data across both runs, indicating its robustness in modeling the melt pool width.
The overall predictive performance of the four algorithms is compared in the scatter plot of Figure 28. The RNN algorithms, particularly Bi-LSTM and GRU, exhibit superior performance in predicting sequential data such as melt pool width, evidenced by their close alignment with the ideal fit line. Both models achieve similarly commendable accuracy, effectively capturing the sequential dependencies within the data. In contrast, predictions by the XGBoost algorithm are notably more dispersed, indicating lower accuracy. This dispersion becomes especially pronounced when the actual width exceeds 2 mm, where XGBoost predictions deviate significantly from the ideal fit. Table 6 summarizes the comparison among all algorithms, highlighting that the Bi-LSTM model achieves the highest R-square value. However, the GRU model offers comparable accuracy with computation time and memory usage 40% and 51% lower than those of the Bi-LSTM model, respectively, demonstrating its greater efficiency.
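The GRU's consistently lower computation time and memory usage reflect its gate structure: a GRU layer has three gate blocks against the LSTM's four, and a Bi-LSTM doubles the LSTM by running forward and backward passes. A sketch of the standard per-layer trainable-parameter counts (biases included; frameworks using a reset-after formulation add a second bias vector, omitted here), with the layer width chosen purely for illustration:

```python
def lstm_params(units, input_dim):
    # Four gate blocks (input, forget, cell, output), each with a
    # kernel, a recurrent kernel, and a bias.
    return 4 * units * (input_dim + units + 1)

def gru_params(units, input_dim):
    # Three gate blocks (update, reset, candidate).
    return 3 * units * (input_dim + units + 1)

def bilstm_params(units, input_dim):
    # Forward and backward LSTMs of the same size.
    return 2 * lstm_params(units, input_dim)

# Example: 64 hidden units over the 5 input features of the melt pool
# dimension model (the hidden size is an assumption for illustration).
sizes = {name: f(64, 5) for name, f in
         [("LSTM", lstm_params), ("GRU", gru_params), ("Bi-LSTM", bilstm_params)]}
```

With these counts the GRU layer carries 25% fewer parameters than the LSTM and roughly a third of the Bi-LSTM, consistent with the ordering of the memory and time columns in Tables 4 through 7.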

3.3.3. Melt Pool Depth Model

In the melt pool depth model, the XGBoost model displays a surprising parity with the Bi-LSTM model in terms of performance in Run14, especially noticeable at the start where XGBoost surpasses Bi-LSTM in accuracy, as shown in Figure 29. In contrast, during Run23 as depicted in Figure 30, although the overall trends of both models align closely with the actual depth measurements, the XGBoost predictions show greater deviations from the actual values, suggesting less consistency compared to the Bi-LSTM model. This indicates that while XGBoost can match the performance of Bi-LSTM in certain scenarios, its performance can be less reliable in others.
Regarding the LSTM and GRU models, their performance in predicting melt pool depth is commendably consistent, exhibiting similar trends. Both models closely align with the actual values, demonstrating their effectiveness in capturing sequential data characteristics. In Run14, although the predictions start slightly below the actual values, both LSTM and GRU adjust quickly and maintain a good match throughout the data range, as shown in Figure 31. Run23 shows a slight divergence in the predictions from both models, especially in the latter half, where the LSTM model exhibits more deviation than the GRU model, yet both still maintain a general adherence to the trend of actual depth values, as illustrated in Figure 32.
The scatter plots of predictions by all four algorithms are presented in Figure 33. Unlike the melt pool peak temperature and other geometric models, no single model exhibits particularly strong performance. All models deviate from the ideal fit, especially when predicting maximum and minimum melt pool depths. For a more comprehensive comparison and analysis, Table 7 reveals that the XGBoost model has the shortest computation time and lowest memory usage, but relatively lower accuracy. Additionally, the GRU model boasts the highest R-square value and has lower computation time and memory usage—29% and 50% less, respectively, compared to the Bi-LSTM model—highlighting the reliability and robustness of the GRU model.

4. Conclusions

This study developed a recurrent neural network (RNN)-based surrogate model to predict melt pool characteristics, such as peak temperature, length, width, and depth, in directed energy deposition (DED) processes. By integrating a three-level, three-factor design of experiments and multi-physics simulation data into an LSTM, Bi-LSTM, and GRU framework, the model demonstrates exceptional predictive accuracy for sequential melt pool data under varied processing conditions. The research also presents a comprehensive evaluation and comparative analysis of surrogate models built with different algorithms. Key contributions of this research include:
  • Robust Model Architecture: Employed advanced RNN architectures—LSTM, Bi-LSTM, and GRU—to effectively capture the sequential and dynamic behavior of melt pools in DED processes.
  • High Predictive Accuracy: Achieved an R-square of 0.983 for melt pool peak temperature predictions using the Bi-LSTM algorithm. Demonstrated superior performance in melt pool geometry predictions:
    - Melt pool length: R-square of 0.903 with the GRU algorithm.
    - Melt pool width: R-square of 0.952 with the Bi-LSTM algorithm.
    - Melt pool depth: R-square of 0.885 with the GRU algorithm.
  • Efficiency and Robustness: The GRU-based surrogate model offered the best balance of accuracy, computation time, and memory usage among the tested algorithms, reducing computation time by at least 29% and memory usage by at least 50% relative to the Bi-LSTM model, highlighting the model's efficiency and robustness.
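For readers implementing a comparable surrogate, the recurrence at the core of the GRU-based model can be written compactly. The following is a minimal NumPy sketch of a single GRU cell (standard formulation) stepped over a sequence with untrained random weights; it illustrates how hidden state is carried across time steps, and is not the trained model from this study:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU time step. W, U, b hold the stacked weights for the
    update (z), reset (r), and candidate (n) gates."""
    Wz, Wr, Wn = W
    Uz, Ur, Un = U
    bz, br, bn = b
    z = sigmoid(x @ Wz + h @ Uz + bz)          # update gate
    r = sigmoid(x @ Wr + h @ Ur + br)          # reset gate
    n = np.tanh(x @ Wn + (r * h) @ Un + bn)    # candidate state
    return (1.0 - z) * n + z * h               # new hidden state

rng = np.random.default_rng(0)
input_dim, units = 5, 8                        # e.g., 5 process features
W = [rng.normal(scale=0.1, size=(input_dim, units)) for _ in range(3)]
U = [rng.normal(scale=0.1, size=(units, units)) for _ in range(3)]
b = [np.zeros(units) for _ in range(3)]

h = np.zeros(units)
for x in rng.normal(size=(10, input_dim)):     # a 10-step input sequence
    h = gru_step(x, h, W, U, b)                # carry state across time
```

In a full surrogate, the final (or per-step) hidden state would feed a dense output layer predicting peak temperature or melt pool dimensions.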

Author Contributions

Conceptualization, S.-H.W., U.T., R.J. and M.A.M.; methodology, S.-H.W., U.T. and M.A.M.; software, S.-H.W., U.T. and A.W.M.; validation, S.-H.W. and U.T.; formal analysis, S.-H.W., U.T. and R.J.; investigation, S.-H.W., U.T. and M.A.M.; resources, F.L.; data curation, S.-H.W. and U.T.; writing—original draft preparation, S.-H.W., U.T. and M.A.M.; writing—review and editing, M.A.M. and F.L.; visualization, S.-H.W.; supervision, M.A.M., A.W.M. and F.L.; project administration, F.L.; funding acquisition, F.L. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by NSF Grants CMMI 1625736 and NSF EEC 1937128, Product Innovation and Engineering’s NAVAIR SBIR Phase II Contract N6833524C0215, and the Center for Aerospace Manufacturing Technologies (CAMT), Intelligent Systems Center (ISC), and Material Research Center (MRC) at Missouri S&T. We greatly appreciate their financial support.

Data Availability Statement

The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Svetlizky, D.; Das, M.; Zheng, B.; Vyatskikh, A.L.; Bose, S.; Bandyopadhyay, A.; Schoenung, J.M.; Lavernia, E.J.; Eliaz, N. Directed energy deposition (DED) additive manufacturing: Physical characteristics, defects, challenges and applications. Mater. Today 2021, 49, 271–295.
  2. Xie, J.; Zhou, Y.; Zhou, C.; Li, X.; Chen, Y. Microstructure and mechanical properties of Mg–Li alloys fabricated by wire arc additive manufacturing. J. Mater. Res. Technol. 2024, 29, 3487–3493.
  3. Madhavadas, V.; Srivastava, D.; Chadha, U.; Raj, S.A.; Sultan, M.T.H.; Shahar, F.S.; Shah, A.U.M. A review on metal additive manufacturing for intricately shaped aerospace components. CIRP J. Manuf. Sci. Technol. 2022, 39, 18–36.
  4. Saboori, A.; Aversa, A.; Marchese, G.; Biamino, S.; Lombardi, M.; Fino, P. Application of directed energy deposition-based additive manufacturing in repair. Appl. Sci. 2019, 9, 3316.
  5. Tariq, U.; Wu, S.H.; Mahmood, M.A.; Woodworth, M.M.; Liou, F. Effect of pre-heating on residual stresses and deformation in laser-based directed energy deposition repair: A comparative analysis. Materials 2024, 17, 2179.
  6. Mohd Yusuf, S.; Cutler, S.; Gao, N. The impact of metal additive manufacturing on the aerospace industry. Metals 2019, 9, 1286.
  7. Piscopo, G.; Iuliano, L. Current research and industrial application of laser powder directed energy deposition. Int. J. Adv. Manuf. Technol. 2022, 119, 6893–6917.
  8. Research and Markets. Market Opportunities for Directed Energy Deposition Manufacturing. Available online: https://www.researchandmarkets.com/reports/4850372/market-opportunities-for-directed-energy (accessed on 6 June 2024).
  9. Brennan, M.; Keist, J.; Palmer, T. Defects in metal additive manufacturing processes. J. Mater. Eng. Perform. 2021, 30, 4808–4818.
  10. Yuhua, C.; Yuqing, M.; Weiwei, L.; Peng, H. Investigation of welding crack in micro laser welded NiTiNb shape memory alloy and Ti6Al4V alloy dissimilar metals joints. Opt. Laser Technol. 2017, 91, 197–202.
  11. Chen, Y.; Sun, S.; Zhang, T.; Zhou, X.; Li, S. Effects of post-weld heat treatment on the microstructure and mechanical properties of laser-welded NiTi/304SS joint with Ni filler. Mater. Sci. Eng. A 2020, 771, 138545.
  12. Ertay, D.S.; Naiel, M.A.; Vlasea, M.; Fieguth, P. Process performance evaluation and classification via in-situ melt pool monitoring in directed energy deposition. CIRP J. Manuf. Sci. Technol. 2021, 35, 298–314.
  13. Jiang, H.Z.; Li, Z.Y.; Feng, T.; Wu, P.Y.; Chen, Q.S.; Feng, Y.L.; Chen, L.F.; Hou, J.Y.; Xu, H.J. Effect of process parameters on defects, melt pool shape, microstructure, and tensile behavior of 316L stainless steel produced by selective laser melting. Acta Metall. Sin. (English Lett.) 2021, 34, 495–510.
  14. Liu, M.; Kumar, A.; Bukkapatnam, S.; Kuttolamadom, M. A review of the anomalies in directed energy deposition (DED) processes & potential solutions-part quality & defects. Procedia Manuf. 2021, 53, 507–518.
  15. Zheng, B.; Haley, J.; Yang, N.; Yee, J.; Terrassa, K.; Zhou, Y.; Lavernia, E.; Schoenung, J. On the evolution of microstructure and defect control in 316L SS components fabricated via directed energy deposition. Mater. Sci. Eng. A 2019, 764, 138243.
  16. Xie, J.; Chen, Y.; Wang, H.; Zhang, T.; Zheng, M.; Wang, S.; Yin, L.; Shen, J.; Oliveira, J. Phase transformation mechanisms of NiTi shape memory alloy during electromagnetic pulse welding of Al/NiTi dissimilar joints. Mater. Sci. Eng. A 2024, 893, 146119.
  17. Mahmood, M.A.; Popescu, A.C.; Oane, M.; Channa, A.; Mihai, S.; Ristoscu, C.; Mihailescu, I.N. Bridging the analytical and artificial neural network models for keyhole formation with experimental verification in laser melting deposition: A novel approach. Results Phys. 2021, 26, 104440.
  18. Wu, Y.; Wu, H.; Zhao, Y.; Jiang, G.; Shi, J.; Guo, C.; Liu, P.; Jin, Z. Metastable structures with composition fluctuation in cuprate superconducting films grown by transient liquid-phase assisted ultra-fast heteroepitaxy. Mater. Today Nano 2023, 24, 100429.
  19. Wu, S.H.; Joy, R.; Tariq, U.; Mahmood, M.A.; Liou, F. Role of In-Situ Monitoring Technique for Digital Twin Development Using Direct Energy Deposition: Melt Pool Dynamics and Thermal Distribution; University of Texas at Austin: Austin, TX, USA, 2023.
  20. Yeoh, Y. Decoupling Part Geometry from Microstructure in Directed Energy Deposition Technology: Towards Reliable 3D Printing of Metallic Components. Ph.D. Thesis, Nanyang Technological University, Singapore, 2021.
  21. Kistler, N.A.; Corbin, D.J.; Nassar, A.R.; Reutzel, E.W.; Beese, A.M. Effect of processing conditions on the microstructure, porosity, and mechanical properties of Ti-6Al-4V repair fabricated by directed energy deposition. J. Mater. Process. Technol. 2019, 264, 172–181.
  22. Tariq, U.; Joy, R.; Wu, S.H.; Arif Mahmood, M.; Woodworth, M.M.; Liou, F. Optimization of Computational Time for Digital Twin Database in Directed Energy Deposition for Residual Stresses; University of Texas at Austin: Austin, TX, USA, 2023.
  23. Hooper, P.A. Melt pool temperature and cooling rates in laser powder bed fusion. Addit. Manuf. 2018, 22, 548–559.
  24. He, W.; Shi, W.; Li, J.; Xie, H. In-situ monitoring and deformation characterization by optical techniques; part I: Laser-aided direct metal deposition for additive manufacturing. Opt. Lasers Eng. 2019, 122, 74–88.
  25. Nuñez, L., III; Sabharwall, P.; van Rooyen, I.J. In situ embedment of type K sheathed thermocouples with directed energy deposition. Int. J. Adv. Manuf. Technol. 2023, 127, 3611–3623.
  26. Zhao, M.; Wei, H.; Mao, Y.; Zhang, C.; Liu, T.; Liao, W. Predictions of Additive Manufacturing Process Parameters and Molten Pool Dimensions with a Physics-Informed Deep Learning Model. Engineering 2023, 23, 181–195.
  27. Wang, Z.; Wang, C.; Zhang, S.; Qiu, L.; Lin, Y.; Tan, J.; Sun, C. Towards high-accuracy axial springback: Mesh-based simulation of metal tube bending via geometry/process-integrated graph neural networks. Expert Syst. Appl. 2024, 255, 124577.
  28. De Borst, R. Challenges in computational materials science: Multiple scales, multi-physics and evolving discontinuities. Comput. Mater. Sci. 2008, 43, 1–15.
  29. Darabi, R.; Azinpour, E.; Reis, A.; de Sa, J.C. Multi-scale multi-physics phase-field coupled thermo-mechanical approach for modeling of powder bed fusion process. Appl. Math. Model. 2023, 122, 572–597.
  30. Zhao, T.; Yan, Z.; Zhang, B.; Zhang, P.; Pan, R.; Yuan, T.; Xiao, J.; Jiang, F.; Wei, H.; Lin, S.; et al. A comprehensive review of process planning and trajectory optimization in arc-based directed energy deposition. J. Manuf. Process. 2024, 119, 235–254.
  31. Bayat, M.; Dong, W.; Thorborg, J.; To, A.C.; Hattel, J.H. A review of multi-scale and multi-physics simulations of metal additive manufacturing processes with focus on modeling strategies. Addit. Manuf. 2021, 47, 102278.
  32. Zhu, Q.; Liu, Z.; Yan, J. Machine learning for metal additive manufacturing: Predicting temperature and melt pool fluid dynamics using physics-informed neural networks. Comput. Mech. 2021, 67, 619–635.
  33. Qi, X.; Chen, G.; Li, Y.; Cheng, X.; Li, C. Applying neural-network-based machine learning to additive manufacturing: Current applications, challenges, and future perspectives. Engineering 2019, 5, 721–729.
  34. Akbari, P.; Ogoke, F.; Kao, N.Y.; Meidani, K.; Yeh, C.Y.; Lee, W.; Farimani, A.B. MeltpoolNet: Melt pool characteristic prediction in Metal Additive Manufacturing using machine learning. Addit. Manuf. 2022, 55, 102817.
  35. Zhu, X.; Jiang, F.; Guo, C.; Wang, Z.; Dong, T.; Li, H. Prediction of melt pool shape in additive manufacturing based on machine learning methods. Opt. Laser Technol. 2023, 159, 108964.
  36. Zhang, Z.; Liu, Z.; Wu, D. Prediction of melt pool temperature in directed energy deposition using machine learning. Addit. Manuf. 2021, 37, 101692.
  37. Jones, K.; Yang, Z.; Yeung, H.; Witherell, P.; Lu, Y. Hybrid modeling of melt pool geometry in additive manufacturing using neural networks. In Proceedings of the International Design Engineering Technical Conferences and Computers and Information in Engineering Conference; American Society of Mechanical Engineers: New York, NY, USA, 2021; Volume 85376, p. V002T02A031.
  38. Mahmood, M.A.; Ishfaq, K.; Khraisheh, M. Inconel-718 processing windows by directed energy deposition: A framework combining computational fluid dynamics and machine learning models with experimental validation. Int. J. Adv. Manuf. Technol. 2024, 130, 3997–4011.
  39. Tariq, U.; Joy, R.; Wu, S.H.; Mahmood, M.A.; Malik, A.W.; Liou, F. A state-of-the-art digital factory integrating digital twin for laser additive and subtractive manufacturing processes. Rapid Prototyp. J. 2023, 29, 2061–2097.
  40. Lu, X.; Lin, X.; Chiumenti, M.; Cervera, M.; Hu, Y.; Ji, X.; Ma, L.; Yang, H.; Huang, W. Residual stress and distortion of rectangular and S-shaped Ti-6Al-4V parts by Directed Energy Deposition: Modelling and experimental calibration. Addit. Manuf. 2019, 26, 166–179.
  41. Newkirk, J. Multi-Layer Laser Metal Deposition Process. Ph.D. Thesis, Missouri University of Science and Technology, Rolla, MO, USA, 2014.
  42. Wu, S.H.; Tariq, U.; Joy, R.; Sparks, T.; Flood, A.; Liou, F. Experimental, computational, and machine learning methods for prediction of residual stresses in laser additive manufacturing: A critical review. Materials 2024, 17, 1498.
  43. Gouge, M.; Michaleris, P.; Denlinger, E.; Irwin, J. The finite element method for the thermo-mechanical modeling of additive manufacturing processes. In Thermo-Mechanical Modeling of Additive Manufacturing; Elsevier: Amsterdam, The Netherlands, 2018; pp. 19–38.
  44. Dhieb, N.; Ghazzai, H.; Besbes, H.; Massoud, Y. Extreme gradient boosting machine learning algorithm for safe auto insurance operations. In Proceedings of the 2019 IEEE International Conference on Vehicular Electronics and Safety (ICVES), Cairo, Egypt, 4–6 September 2019; pp. 1–5.
  45. Yu, Y.; Si, X.; Hu, C.; Zhang, J. A review of recurrent neural networks: LSTM cells and network architectures. Neural Comput. 2019, 31, 1235–1270.
Figure 1. Proposed flow chart of current research.
Figure 2. Factorial design of experiments.
Figure 3. Tool path and simulation setup.
Figure 4. Thermal properties of Ti6Al4V [40].
Figure 5. Thermal simulation during material deposition of Run27 (a), maximum temperature value extraction (b), and melt pool dimension (c).
Figure 6. Architecture of LSTM algorithm.
Figure 7. Series of LSTM architecture.
Figure 8. Architecture of Bi-LSTM algorithm.
Figure 9. Architecture of GRU algorithm.
Figure 10. Training features of melt pool peak temperature model.
Figure 11. Training label of melt pool peak temperature model.
Figure 12. Training label of Run10 in melt pool peak temperature model.
Figure 13. Training features and labels of melt pool dimension model.
Figure 14. Run1: actual peak temperature versus prediction by Bi-LSTM and XGBoost.
Figure 15. Run27: actual peak temperature versus prediction by Bi-LSTM and XGBoost.
Figure 16. Run1: actual peak temperature versus prediction by LSTM and GRU.
Figure 17. Run27: actual peak temperature versus prediction by LSTM and GRU.
Figure 18. Actual peak temperature with predictions from four algorithms.
Figure 19. Run14: actual length versus prediction by Bi-LSTM and XGBoost.
Figure 20. Run23: actual length versus prediction by Bi-LSTM and XGBoost.
Figure 21. Run14: actual length versus prediction by LSTM and GRU.
Figure 22. Run23: actual length versus prediction by LSTM and GRU.
Figure 23. Actual length versus predictions from four algorithms.
Figure 24. Run14: actual width versus prediction by Bi-LSTM and XGBoost.
Figure 25. Run23: actual width versus prediction by Bi-LSTM and XGBoost.
Figure 26. Run14: actual width versus prediction by LSTM and GRU.
Figure 27. Run23: actual width versus prediction by LSTM and GRU.
Figure 28. Actual width versus predictions from four algorithms.
Figure 29. Run14: actual depth versus prediction by Bi-LSTM and XGBoost.
Figure 30. Run23: actual depth versus prediction by Bi-LSTM and XGBoost.
Figure 31. Run14: actual depth versus prediction by LSTM and GRU.
Figure 32. Run23: actual depth versus prediction by LSTM and GRU.
Figure 33. Actual depth versus predictions from four algorithms.
Table 1. Summary of process parameters.

Process Parameter (Unit) | Values
Laser Power (W) | 600, 800, 1000
Scanning Speed (mm/s) | 2, 4, 6
Hatching Space (%) | 40, 50, 60
Laser Beam Size (mm) | 2
Layer Thickness (mm) | 0.5
Thermal Properties | Shown in Figure 4
Table 2. The twenty-seven-run design of experiments for multi-physics simulation.

Run | Laser Power (W) | Scanning Speed (mm/s) | Hatch Space (%)
1 | 600 | 2 | 60
2 | 600 | 2 | 50
3 | 600 | 2 | 40
4 | 600 | 4 | 60
5 | 600 | 4 | 50
6 | 600 | 4 | 40
7 | 600 | 6 | 60
8 | 600 | 6 | 50
9 | 600 | 6 | 40
10 | 800 | 2 | 60
11 | 800 | 2 | 50
12 | 800 | 2 | 40
13 | 800 | 4 | 60
14 | 800 | 4 | 50
15 | 800 | 4 | 40
16 | 800 | 6 | 60
17 | 800 | 6 | 50
18 | 800 | 6 | 40
19 | 1000 | 2 | 60
20 | 1000 | 2 | 50
21 | 1000 | 2 | 40
22 | 1000 | 4 | 60
23 | 1000 | 4 | 50
24 | 1000 | 4 | 40
25 | 1000 | 6 | 60
26 | 1000 | 6 | 50
27 | 1000 | 6 | 40
Table 3. Summary of training and testing data of surrogate models.

Model | Training Data | Testing Data | Training Size | Testing Size | Features | Labels
Melt Pool Peak Temperature | Run2-4, Run10-13, Run15-18, Run24-26 | Run1, Run5, Run14, Run23, Run27 | 28,683 | 10,184 | Time; Position X, Y, Z; Laser Power; Scanning Speed; Hatch Space | Melt Pool Peak Temperature
Melt Pool Dimension | Run2-4, Run10-13, Run15-18, Run24-26 | Run1, Run5, Run14, Run23, Run27 | 20,182 | 7,590 | Time; Peak Temperature; Laser Power; Scanning Speed; Hatch Space | Melt Pool Length; Melt Pool Width; Melt Pool Depth
Table 4. Evaluation and comparative analysis: melt pool peak temperature model.

Algorithm | R-Square | RMSE | MAE | Computation Time (s) | Memory Usage (GB)
XGBoost | 0.852 | 0.0550 | 0.0382 | 16.67 | 0.747
LSTM | 0.979 | 0.0178 | 0.0126 | 238.60 | 2.41
Bi-LSTM | 0.983 | 0.0153 | 0.0101 | 290.25 | 5.24
GRU | 0.978 | 0.0179 | 0.0129 | 189.30 | 2.28
Table 5. Evaluation and comparative analysis: melt pool length model.

Algorithm | R-Square | RMSE | MAE | Computation Time (s) | Memory Usage (GB)
XGBoost | 0.698 | 0.1031 | 0.0629 | 16.22 | 0.269
LSTM | 0.888 | 0.0539 | 0.0412 | 76.23 | 1.37
Bi-LSTM | 0.902 | 0.0501 | 0.0369 | 120.55 | 2.65
GRU | 0.903 | 0.0503 | 0.0381 | 67.75 | 1.30
Table 6. Evaluation and comparative analysis: melt pool width model.

Algorithm | R-Square | RMSE | MAE | Computation Time (s) | Memory Usage (GB)
XGBoost | 0.752 | 0.0963 | 0.0762 | 16.95 | 0.371
LSTM | 0.946 | 0.0418 | 0.0313 | 86.26 | 1.37
Bi-LSTM | 0.952 | 0.0399 | 0.0293 | 128.70 | 2.65
GRU | 0.951 | 0.0400 | 0.0291 | 76.73 | 1.30
Table 7. Evaluation and comparative analysis: melt pool depth model.

Algorithm | R-Square | RMSE | MAE | Computation Time (s) | Memory Usage (GB)
XGBoost | 0.751 | 0.0892 | 0.0555 | 20.20 | 0.344
LSTM | 0.871 | 0.0479 | 0.0360 | 97.69 | 1.44
Bi-LSTM | 0.881 | 0.0476 | 0.0359 | 120.19 | 2.72
GRU | 0.885 | 0.0420 | 0.0293 | 85.43 | 1.37