Metamodels’ Development for High Pressure Die Casting of Aluminum Alloy

Abstract: Simulation is a very useful tool in the design of the part and process conditions of high-pressure die casting (HPDC), due to the intrinsic complexity of this manufacturing process. Usually, physics-based models solved by finite element or finite volume methods are used, but their main drawback is the long calculation time. In order to apply optimization strategies in the design process or to implement online predictive systems, faster models are required. One solution is the use of surrogate models, also called metamodels or grey-box models. The novelty of the work presented here lies in the development of several metamodels for the HPDC process. These metamodels are based on a gradient boosting regressor technique and derived from a physics-based finite element model. The results show that the developed metamodels are able to predict with high accuracy the secondary dendrite arm spacing (SDAS) of the cast parts and, with good accuracy, the misrun risk and the shrinkage level. The results obtained in the predictions of microporosity and macroporosity, eutectic percentage, and grain density were less accurate. The metamodels were very fast (less than 1 s); therefore, they can be used for optimization activities or be integrated into online prediction systems for the HPDC industry. The case study corresponds to several parts of aluminum cast alloys, used in the automotive industry, manufactured by high-pressure die casting in a multicavity mold.


Introduction
A large number of scientific and engineering fields study complex real-world phenomena or solve challenging design problems with simulation techniques [1][2][3][4]. However, in many cases, the computational cost of these simulations makes their use impossible for real-time predictions and limits their application in optimization tasks. The use of machine learning techniques such as neural networks or ensemble methods has become a useful alternative to avoid these limitations.
In this paper, we take advantage of the metamodeling schema to have reliable and fast quality predictions for high-pressure die casting (HPDC). The metamodeling approach has been used in many fields [5][6][7], but for the particular case of the HPDC process, the number of works is small. The work of Fiorese et al. [8,9] focused on the use of new predictive variables derived from the plunger movement and on the prediction of the ultimate strength of the cast parts. Krimpenis et al. [10] used neural nets to predict the misrun risk and the solidification time. Finally, Yao et al. [11] based their metamodel on Gaussian process regression and predicted the temperature at the end of filling.
The novelty of this work is two-fold. From a computational point of view, the novelty lies in the application of a gradient boosting regressor technique to the modeling of the HPDC process. From a metal casting point of view, the novelty lies in the possibility of predicting in real time many aspects of interest for the quality of the manufactured parts, such as the misrun risk, the shrinkage defects, the microporosity and macroporosity, the grain density, the eutectic percentage, and the SDAS. These metamodels make it possible to implement an online predictive system in real time to be used during the manufacturing process, and they can also be used as a basis for the optimization of the process parameters, shortening the setup of the machine configuration.
Based on their underlying principles and data sources, there are three different types of process modeling schemes:
• White-box models: These are rigorous models based on mass and energy balances together with the process rate or kinetics equations. They give an almost exact image of the physical laws and the behavior of the given real system. Complete knowledge of the way the system works is needed to develop the model;
• Black-box models: These are developed by measuring the inputs and outputs of the system and fitting a linear or nonlinear mathematical function to approximate the operation of the system. In this case, since data from experiments on the real system are used to build the model, we are not given any insight into or understanding of how the system works;
• Grey-box models: These are semi-empirical or experimentally adjusted models. Grey-box models are developed using white-box models whose parameters are estimated using the measured system inputs and outputs. Some examples are neuro-fuzzy systems or semi-empirical models.
Metamodels, also called surrogate models or response surfaces, are within the "grey-box models" group. They are compact and inexpensive to evaluate and have proven to be very useful for tasks such as optimization, design space exploration, prototyping, and sensitivity analysis. In this work, the development of certain metamodels for high-pressure die casting (HPDC), the most common process to cast automotive parts, is presented.
A metamodel is an approximation of the relationship between the design variables (inputs) and the response functions (outputs), which is implied by an underlying simulation model.
The basic idea is to evaluate a certain problem entity, in this work a high-pressure die casting process. This problem entity can be modeled by some type of simulation model, typically a nonlinear model based on the finite element method (FEM) or on the finite volume method (FVM). However, the main drawback of this type of model is the long time needed to evaluate it. Therefore, a metamodel that can be quickly evaluated is a good option in any process with a simulation model. An accurate metamodel should be valid with respect to both the finite element model and the process, and if it is, it forms a very useful substitute for them (Figure 1). The work presented in [12] also explained this concept, but for the particular case of a metal forming process.
In this paper, we apply the metamodeling schema to obtain reliable quality predictions for a HPDC process. HPDC is a cyclic process commonly used to produce complex parts from light alloys, such as aluminum, magnesium, or zinc alloys. This manufacturing process is intended for mass production, due to the high costs associated with the molds' manufacturing; therefore, the main user of this type of process is the automobile industry (80% of the market [13]) followed by the housing industry [14]. New trends in mobility and in particular in the automotive sector are requiring the design of new advanced alloys and their fast implementation in current manufacturing processes. This article is hereafter structured as follows: Section 2 gives some background about the HPDC process and its simulation; Section 3 describes the basis of the methodologies used (the variables' selection, the design of experiments, the numerical simulation, and the metamodeling); Section 4 explains how these methodologies were applied to this case study; Sections 5 and 6 show the results and their discussion, respectively; finally, Section 7 compiles the conclusions of the work performed.

High-Pressure Die Casting Modeling
HPDC, also called simply die casting, is a cyclic process for the mass production of complex parts from light alloys. In this type of manufacturing process, the molten alloy is injected into a metallic mold by a piston moving in a casting chamber, as can be seen in the simple schema shown in Figure 2. More detailed information about this type of manufacturing process can be found in [15,16]. At the beginning of the injection, the piston moves slowly to push the molten alloy to the mold inlet, avoiding the trapping of air; this movement is called the first phase. Then, the piston velocity is suddenly increased to fill the cavity at a high velocity; this movement is known as the second phase. The manufacturing process is fast, with cycle times usually below 2 min. The output product (the casting or cast part) is influenced by a large number of process parameters, such as the alloy composition, casting temperature, filling time, shot velocity profile, mold geometry, and others. The combination of most of these process parameters determines the flow path of the molten alloy and its solidification, which ultimately define the quality of the product in terms of microstructure, defects, and properties. The high number of parameters involved, together with the complex interactions among them, makes it very difficult for the people involved in the part and process design to predict the final quality of the cast part using only their knowledge of the process. For this reason, simulation is a very good tool that helps to advance the design process with more confidence.
The numerical simulation (white-box model approach) of metal casting processes is, currently, a well-known technology. At the industrial level, its use by means of commercial codes based on the FEM or on the FVM is quite widespread, although, in general, it is restricted to solving the thermal flow problem. It allows predicting the risk of misruns and shrinkage defects [17,18], which are usually the main interests of the foundryman. At the research level, numerical simulation is also used to make metallurgical predictions, such as the different phases expected in the microstructure or even aspects related to the grain growth [19,20]. These numerical simulation models solve the equations that represent the physical laws that govern the real process, in the case of HPDC, mainly heat transfer and fluid dynamics. For this reason, they are also known as physics-based models. They provide very explanatory predictions with a good level of precision, which can be observed graphically, such as, for example, the flow path predicted for the molten alloy during the cavity filling, the temperature distributions, the solidification progress, etc., as shown in the example of Figure 3. The number of published works focused on the numerical simulation of the HPDC process is rather large, and some examples are [21][22][23][24][25][26]. The main drawback of these types of models, as mentioned above, is the high CPU power needed to solve them. This entails long calculation times, which can last several hours or even several days. Although different approaches have been used to reduce the calculation times [27][28][29][30], the long response time continues to hinder their use, for example, for the application of optimization strategies. This also makes it impossible to implement them in online prediction systems connected to the manufacturing process in real time.
One solution to achieve a rapid response in the predictions of the HPDC process is the use of metamodels (grey-box approach) built with machine learning techniques. This type of model is generally only as accurate as the quality of the data and tends to have a narrower view of the process compared with physics-based (white-box) models, but it is much faster, which makes it possible to use it in optimization activities or in online predictions. In the case of HPDC, deriving the grey-box model with machine learning techniques from a physics-based model instead of from actual experimental data has some drawbacks and advantages. The main drawback is the possible loss of accuracy. Whereas it is assumed that the experimental data perfectly reflect the actual situation, the use of the simulation results can imply a loss of accuracy, since the physics-based model will never be completely accurate [31]. The main advantage is the greater flexibility to select different input data if they are required during the grey-box model development. A supplementary advantage is the significant cost savings.
Considering these advantages, in the work presented here, we decided to develop a metamodel of the HPDC process, from a physics-based model. The methodology followed to develop this metamodel, capable of predicting the misrun risk, the shrinkage, the porosity level, the microstructure, and the grain density of the manufactured parts, is explained together with the results obtained. The case study corresponds to several parts manufactured in AlSi alloys in a multicavity mold.

Methodology
The basic methodology followed in the metamodel development is summarized in the flow diagram shown in Figure 4. The different steps are explained in detail in the following sections.

Variables Selection
The data needed to develop the metamodel are divided into explanatory and response variables. A supervised selection was performed for both types of variables, based on bibliography sources [32][33][34] and on the knowledge of the research team in the physics that governs the HPDC process [35,36]. All selected variables are continuous.
First, the response variables, also called dependent variables, were selected. The response variables are the values that the metamodel must predict. They correspond to several key performance indicators (KPIs) representative of the part quality, and they were selected taking into account their interest for the metal casting industry, their influence on the part performance, and the data availability. Their values were obtained as a result of the physics-based model.
Next, the selection of the explanatory variables, also called independent variables, was performed. The criteria followed for this selection were, first, their effect on the selected KPIs, that is, their predictive ability. Second was the possibility of measuring these variables during experimental tests. This point is very important to allow the posterior adjustment of the model against experimental measurements, and it is imperative if the metamodel is to be used to make online predictions. In addition, the range considered for each explanatory variable was selected to define the limits of the continuous field where it could vary.
The number of selected explanatory variables was reduced as much as possible. The criterion followed was to select the minimum number of variables necessary to carry out the prediction with the desired level of precision. The reason is that, generally, the greater the number of variables included in the metamodel, the greater the number of instances necessary to develop it. Although this is largely related to the type of regression model performed, the one-in-ten rule, which recommends a minimum of ten instances for each independent variable, is commonly mentioned as a rule of thumb [37]. In this case, each instance is one simulation of the case study under different specific process conditions. Therefore, if the number of explanatory variables is very high, the number of simulations to be performed will be very large and the time required will be very long. The main drawback of using a small number of explanatory variables is the risk of omitting some variables whose effect is significant for the prediction. However, using few variables also has some additional advantages, such as requiring a smaller number of sensors if the metamodel is used in an online predictor.

Design of Experiments
Once the limits of the continuous field where the explanatory variables can vary have been defined, the second step is to carry out a design of experiments to determine the test cases, that is, the numerical simulations to be carried out. The idea is to have the minimum number of cases necessary to cover the whole field where the process conditions may vary. As all the variables used in this case are continuous, it is also necessary to define the number of levels for each variable, that is, the points to study in the variable range.
Different techniques can be used to define the tests to be performed. In a complete factorial design, all possible combinations of the levels defined for the factors (variables) are studied. This type of design accentuates the factor effects, allows the estimation of the interactions among them, and permits a good coverage of the whole field of the boundary conditions. However, the drawback is that the resultant number of tests is high. One alternative to the full factorial design is the use of reduced designs such as the central composite design (CCD) or Box-Behnken designs. These approximations attempt to map the relationship between the response and the factor settings, minimizing the number of trials. For more detailed information about these types of designs, Reference [38] can be consulted.

Numerical Simulations: Obtaining the Data
The numerical simulations (physics-based models or white-box models) corresponding to each of the case studies identified in the DoE were set up and run.
The main governing equations of the physics involved in HPDC models at the macroscale level are the conservation of energy (1), the mass conservation or continuity equation (2), and the Boussinesq form of the Navier-Stokes equation for incompressible Newtonian fluids (3), where ρ represents the density, h the enthalpy, t the time, ν the velocity, k the thermal conductivity, T the temperature, Ṙq the heat generation per unit mass, ρ0 the density at the reference temperature and pressure, p̄ the modified pressure, µl the shear viscosity, g the gravity, and βT the volumetric thermal expansion coefficient. In addition, some supplementary equations were used to calculate aspects related to the microstructure, together with specific algorithms for defect prediction. More detailed information about these topics can be found in [39,40].
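The equations referenced as (1)-(3) are not reproduced in the text above. For reference, their standard forms, written with the symbols listed above, would read approximately as follows (this is a sketch of the usual formulation, not the paper's exact equations; $T_0$, the reference temperature, is introduced here):

```latex
% (1) Conservation of energy
\rho \frac{\partial h}{\partial t} + \rho\, \nu \cdot \nabla h
  = \nabla \cdot \left( k \nabla T \right) + \rho \dot{R}_q
% (2) Mass conservation (continuity) for an incompressible fluid
\nabla \cdot \nu = 0
% (3) Boussinesq form of the Navier--Stokes equation
\rho_0 \left( \frac{\partial \nu}{\partial t} + \nu \cdot \nabla \nu \right)
  = -\nabla \bar{p} + \mu_l \nabla^2 \nu
    + \rho_0\, g \left[ 1 - \beta_T \left( T - T_0 \right) \right]
```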
These physics-based models were set up and solved with ProCAST, a finite element software specially focused on the simulation of metal casting processes. Actually, it uses a variation of the finite element method called the edge-based finite element method (EBFEM), which is a more robust discretization method for fluid flow problems. In addition, it uses a coupled finite element-cellular automaton model for the prediction of grain structure solidification [41].
To set up the model, first, the mold geometry was drawn with a CAD system; then, it was discretized by means of a finite element mesh, and the material properties and boundary conditions were applied. Finally, the model was solved, as can be seen in Figure 5. The numerical simulation of HPDC is very complex and very demanding in terms of CPU time. On the one hand, the flow velocity is very high during mold filling. In fact, the filling time of the cavities is measured in milliseconds. Moreover, the alloy cooling rates are also very high, easily exceeding values of 20 °C/s. All this hinders the convergence of the numerical solution, requiring the use of short time steps that extend the CPU times. On the other hand, the manufacturing process is a continuous cycle procedure. Thus, the mold temperature is not constant throughout the cycle, and the temperature distribution is not uniform throughout the mold, since it depends on the cavities' geometry, the presence of cooling or heating systems, and the cycle times. Therefore, the simulation of the thermal behavior of the mold during several cycles until reaching thermal stabilization is required before performing the simulation of the filling and solidification process.
Of course, when the simulation is not limited to the thermal flow problem, additional complex calculations must be performed to obtain complementary results, such as the microstructure, grain density, or porosity caused by air entrapment, which lengthen the CPU times even more.
These facts explain the interest in reducing the number of tests as much as possible.

Metamodel Development: Regression Model
There are many machine learning techniques commonly used in the literature to perform a supervised regression model, which is the underlying concept in a metamodel strategy to predict any continuous variable. Some of those techniques are neural networks [42], kriging or the Gaussian process [43], ensemble methods such as random forest or gradient boosting [44,45], etc.
For this study, we did not analyze all the strategies named before, but only the gradient boosting algorithm, which is a very competitive approach in terms of accuracy in this type of regression problem, as pointed out in [46]. Two other reasons for selecting this technique are its ability to prevent overfitting, which is very interesting in problems with a small amount of data, and, last but not least, the fact that the setup of this approach is not as time consuming as that of kriging or neural networks.
The main idea of the boosting method is to add models to the set sequentially. In each iteration, a "weak" model (base learner) is trained with respect to the total error of the set generated up to that moment.
To overcome the overtraining problem of this type of algorithm, a statistical connection was established with the gradient descent formulation to find local minima.
In gradient boosting models (GBMs), the learning procedure consecutively fits new models to provide a more accurate estimate of the response variable. The main idea behind this algorithm is to build each new base model so that it correlates as much as possible with the negative gradient of the loss function associated with the whole ensemble. The loss function applied can be arbitrary, but to give better intuition, if the error function is the classic quadratic error, the learning procedure results in a consecutive fitting of the residual errors. In general, the choice of the loss function depends on the problem, with a great variety of loss functions implemented up to now and with the possibility of implementing a specific one for the task. Normally, the choice of one of them is a consequence of trial and error.
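The sequential negative-gradient fitting described above can be sketched in a few lines of plain Python. This is an illustrative toy (squared loss, depth-1 regression stumps on a single feature), not the implementation used in the paper; with the quadratic loss, the negative gradient is simply the residual, so each new learner fits the current residuals.

```python
# Minimal gradient boosting sketch: each base learner (a regression
# stump) is fit to the residuals, i.e., the negative gradient of the
# L2 loss of the ensemble built so far. Data and names are illustrative.

def fit_stump(x, residuals):
    """Find the single-feature split that best fits the residuals in L2."""
    best = None
    for threshold in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= threshold]
        right = [r for xi, r in zip(x, residuals) if xi > threshold]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, threshold, lmean, rmean)
    _, threshold, lmean, rmean = best
    return lambda xi: lmean if xi <= threshold else rmean

def gradient_boost(x, y, n_estimators=50, learning_rate=0.1):
    f0 = sum(y) / len(y)                   # initial constant estimate
    pred = [f0] * len(y)
    stumps = []
    for _ in range(n_estimators):
        # Negative gradient of the quadratic loss = residuals
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, residuals)
        stumps.append(stump)
        pred = [pi + learning_rate * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: f0 + sum(learning_rate * s(xi) for s in stumps)

# Toy usage: learn y = x^2 on a few points
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 1.0, 4.0, 9.0, 16.0]
model = gradient_boost(xs, ys, n_estimators=200, learning_rate=0.1)
```

With enough learners, the ensemble drives the training residuals close to zero, which is why the number of learners (discussed next) is the key parameter to tune.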
Therefore, the objective of this model is to minimize the loss function ψ(y, f) shown in (4). With regard to the loss function, the minimum absolute error (L1 loss), considered one of the most robust, is usually used, but the minimum square error (L2 loss) is also a common choice.
To summarize, the complete gradient boosting algorithm can be formulated as originally proposed by Friedman [45]. A computationally flexible way of capturing the interactions among the variables in a GBM is the use of regression trees. These models usually have the tree depth and the number of splits as parameters, but the most important parameter in a GBM is the number of trees generated (learners). Therefore, an analysis of the number of learners for each case is needed to obtain the most reliable parametrization. In Figure 6, an analysis of the number of learners versus the error of the whole GBM is shown for the misrun prediction of Part Numbers 2, 3, and 4.

Case Study and Variables' Selection
The case study corresponded to several parts cast in one multicavity mold by means of the HPDC process. The mold has four cavities, as can be seen in Figure 7, to manufacture four parts in each cycle. The alloy used is an AlSi alloy, common in HPDC manufacturing and in automotive structural components. Part Number 1 was designed to evaluate the castability of the alloy, that is, its ability to fill the entire cavity of the part; Part Numbers 2 and 3 are specimens for tensile tests; and Part Number 4 is a flat plate designed to evaluate the porosity level and the microstructure.
As introduced in previous sections, the CPU times needed to obtain the simulation results were very long. In the tests conducted for this case study, the CPU times varied between 5 h and 37 h for each simulation. These variations in CPU times were related to the convergence difficulties found for the different cases, due to the different boundary conditions of each of them (mold and alloy temperatures and velocity values). As explained, we tried to reduce the number of simulations as much as possible due to the long calculation times, and therefore, a small number of explanatory variables were selected. The selected explanatory variables are shown in Table 1, and the range of variation considered for each of them in Table 2. The range of variation selected was intentionally wide in order to cover a broad framework of manufacturing conditions.

Explanatory Variable Details
Mold temperature: Average temperature of both cavity units (fixed and mobile) at the beginning of each injection
Alloy temperature: Temperature of the alloy at the start of each injection
Phase 1 velocity: Average piston velocity during the first phase of the alloy injection
Phase 2 velocity: Maximum piston velocity reached during the second phase of the alloy injection
The response variables, shown in Table 3, were selected taking into account their interest for the metal casting industry due to their influence on the part performance.

Design of Experiments
To obtain a reduced number of tests, a Box-Behnken approach was selected for the DoE. The four explanatory variables (two temperatures and two velocities) were included considering two levels for each of them (the minimum and the maximum value of their range). Thus, a design with four central points for each block was made.

Response Variable Details
Misrun risk: Qualitative variable that represents the risk of the part remaining unfilled in some areas; it is a continuous variable whose value goes from 0 to 3 (0 = no risk; 3 = very high risk)
Shrinkage: Continuous variable that predicts shrinkage defects, that is, porosity caused by the alloy contraction during solidification; measured in %
Microporosity: Continuous variable that predicts microporosity defects, that is, porosity lower than 0.1% caused mainly by air entrapment during filling and solidification; measured in %
Macroporosity: Continuous variable that predicts macroporosity defects, that is, porosity greater than 0.1% caused by the combination of air entrapment and shrinkage; measured in %
Grain density: Continuous variable that predicts the grain density, measured in number of grains per cm²
Eutectic: Continuous variable that predicts the amount of eutectic phase, measured in %; the rest of the microstructure will be the fcc phase
SDAS: Continuous variable that predicts the secondary dendrite arm spacing (SDAS) of the microstructure, measured in µm
In this way, a Box-Behnken design consisting of 36 experiments that combine different values of the variables was obtained. As this type of approach is generally used for physical experiments, some of the tests would normally be repeated to verify the variability of the experiment under the same conditions. As the numerical simulation is deterministic, repeating exactly the same case would not make sense. For this reason, in the repeated cases, some variables were modified randomly within the range defined for each of them, obtaining new combinations of variables. Figure 8 shows the combination of variables corresponding to the 36 experiments or instances obtained from the DoE.
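To make the structure of such a design concrete, the following sketch generates a basic Box-Behnken layout for the four explanatory variables in coded units (-1 = range minimum, 0 = midpoint, +1 = maximum). It is illustrative only: the paper's actual 36-run design, with its blocks and randomized repeats, is not reproduced here.

```python
# Sketch of a Box-Behnken-style design: every pair of factors is set to
# +/-1 while the remaining factors stay at the midpoint, plus a number
# of center runs. Pure Python, for illustration.
from itertools import combinations, product

def box_behnken(n_factors, n_center=3):
    runs = []
    # Edge runs: each pair of factors at the extremes, others at midpoint
    for i, j in combinations(range(n_factors), 2):
        for a, b in product((-1, 1), repeat=2):
            run = [0] * n_factors
            run[i], run[j] = a, b
            runs.append(run)
    # Center runs: all factors at the midpoint
    runs.extend([[0] * n_factors for _ in range(n_center)])
    return runs

factors = ["mold_T", "alloy_T", "phase1_v", "phase2_v"]  # coded factors
design = box_behnken(len(factors), n_center=3)
# 4 factors -> C(4,2) * 4 = 24 edge runs, plus the center runs
```

Each coded run is then mapped back to physical values using the ranges in Table 2 before setting up the corresponding simulation.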

Numerical Simulation
The numerical simulations corresponding to each test case defined by the DoE were carried out.
The two cavity units, located in the fixed and mobile plates, respectively, were drawn with a CAD system together with the piston and its sleeve. This set was discretized with a finite element mesh formed by 468,436 nodes and 2,285,758 elements.
The material properties of each volume of this set were assigned: H13 steel for the cavity units (called the mold) and AlSi for the alloy. In relation to the boundary conditions, the mold and the alloy temperatures were defined. The heat transfer coefficients (HTCs) were assigned between the different volumes (for example, between the alloy and the mold or between the two halves of the mold). These HTCs took into account the cycle phases. For example, when the mold was closed, the two halves were in contact, and the heat transfer between them was determined by the corresponding HTC; however, when the mold was open, there was no contact between the halves, so there was no heat transfer between them. Convection and radiation heat transfer conditions were also applied on the volumes' surfaces to model the cooling when the mold was open. Air venting conditions were applied to model the air that must come out from the cavities. The piston movement was also defined. In addition to these basic boundary conditions, volume nucleation conditions were applied to perform the grain growth calculations. Finally, several run parameters specific to the software used were set up to define aspects such as the type of calculations (thermal, flow, microstructure, etc.) and the details of each of them. Figure 9 shows a scheme with some of the boundary conditions applied in the model. The assignment of the boundary conditions corresponding to the selected explanatory variables is straightforward for the piston velocities and the alloy temperature, but not for the mold temperature. As introduced, the cyclic nature of the HPDC process causes a nonuniform temperature distribution throughout the mold. The mold temperature values considered in the DoE corresponded to the average mold temperature at the beginning of each cycle.
For this reason, different simulations were performed with different alloy temperatures and cycle times until reaching mold temperature distributions whose average values corresponded to the values considered in the DoE.
The models corresponding to each test case defined by the DoE were run, and the results corresponding to the selected response variables were extracted. The results provided by these physics-based models are mainly graphical, as can be seen in the examples shown in Figure 10. For this reason, these results were postprocessed to obtain a numerical average value representative of each result for each part. In addition, several parts were manufactured and subjected to metallographic studies to analyze the microstructure, to tensile tests to evaluate the mechanical properties, and to tomographies to study the porosity level (Figure 11).

Metamodel Development
A preliminary analysis of the data showed that there were no clear correlations between the explanatory variables (inputs) and the response variables (targets). In Figure 12, the Pearson correlation shows that, in most of the cases, a univariate analysis or prediction could not be performed. Only in the case of the SDAS could a further analysis with respect to the mold temperature be performed; see Figure 13. This analysis showed that a univariate regression model could give a good result, but we show that a GBR outperformed the univariate analysis. As mentioned before, the basis of the metamodel as a regression technique is a GBR, and the most important parameter to configure in the GBR algorithm is the number of estimators created in the model. However, other parameters must also be considered to set up the metamodel to obtain a reliable prediction.
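A correlation screening of this kind can be sketched as follows. The data below are small synthetic stand-ins (the ranges and the SDAS-mold-temperature trend are illustrative assumptions, not the paper's 36-instance dataset): one input is given a strong relationship with the response, the other none, mimicking the situation in Figures 12 and 13.

```python
# Sketch of the preliminary Pearson-correlation screening between
# explanatory variables and a response variable. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n = 36  # one row per simulated instance, as in the DoE
inputs = {
    "mold_T": rng.uniform(150.0, 300.0, n),   # hypothetical ranges
    "alloy_T": rng.uniform(650.0, 750.0, n),
}
# Illustrative response: SDAS coarsening with mold temperature, plus noise
sdas = 30.0 + 0.05 * inputs["mold_T"] + rng.normal(0.0, 0.5, n)

# Pearson correlation of each input with the response
corrs = {name: float(np.corrcoef(x, sdas)[0, 1]) for name, x in inputs.items()}
```

A high |r| for one input (here, mold temperature) suggests a univariate model might suffice for that target, while near-zero correlations across the board motivate a multivariate regressor such as the GBR.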
According to the implementation used, the GBR has two groups of parameters: boosting parameters and tree-specific parameters. In this study, regarding the boosting parameters, we analyzed the number of estimators, the learning rate, and the subsample, as well as the max depth of the trees:
• Number of estimators: This is the number of boosting stages to perform. Gradient boosting is fairly robust to overfitting, so a large number usually results in better performance. However, although the GBR is robust at a higher number of trees, it can still overfit at some point; hence, this parameter should be tuned;
• Learning rate: This parameter determines the impact of each tree on the outcome. The GBR works by starting with an initial estimate, which is updated using the output of each tree; the learning rate controls the magnitude of this change in the estimates. Lower values are generally preferred, as they make the model robust to the specific characteristics of each tree and thus allow it to generalize well; however, lower values also require a higher number of trees to model all the relations, which is computationally expensive;
• Subsample: This is the fraction of observations to be selected for each tree, with the selection performed by random sampling. Values slightly less than one make the model robust by reducing the variance; typical values around 0.8 generally perform well, but can be tuned further;
• Max depth: This is the maximum depth of a tree, used to control overfitting, as a higher depth allows the model to learn relations that are very specific to a sample.
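The four parameters listed above match the names used by scikit-learn's GradientBoostingRegressor, so a tuning loop over them can be sketched as below. This is an assumption-laden illustration: the grid values and the synthetic data are placeholders, not the paper's settings or its 36 simulated instances.

```python
# Sketch of tuning the four GBR hyperparameters discussed above with a
# cross-validated grid search. Synthetic data; illustrative grid values.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(size=(36, 4))                   # 4 explanatory variables
y = 10.0 * X[:, 0] + rng.normal(0.0, 0.1, 36)   # illustrative response

param_grid = {
    "n_estimators": [25, 50, 100],
    "learning_rate": [0.05, 0.1],
    "subsample": [0.8, 1.0],
    "max_depth": [2, 3],
}
search = GridSearchCV(GradientBoostingRegressor(random_state=0),
                      param_grid, cv=3, scoring="r2")
search.fit(X, y)
best = search.best_params_   # best combination found by cross-validation
```

In the paper, the equivalent of this search was carried out per response variable via the Monte Carlo boxplot analysis of Figure 14, since with only 36 instances the cross-validation variance matters.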
Figure 14 shows an exhaustive search over the number of estimators based on the reliability of the prediction in a Monte Carlo analysis. The dispersion of the scores for each target is presented as a boxplot, in order to find the best configuration for each model separately.
Figure 14. Boxplot analysis showing the dispersion obtained for the "estimators" hyperparameter of the GBR algorithm in a Monte Carlo analysis between 1 and 200 estimators for each response variable.
In the case of misrun and shrinkage, it is difficult to draw a conclusion about the best number of estimators for the GBR; the porosity models, however, performed best with around 25 estimators. Although the grain density and eutectic models showed a clear trade-off in this parameter, the scores of these models were too low to draw firm conclusions.
This first analysis gave an idea of the performance and score of the models. For further analysis, the best number of estimators for each model was used, and the same approach was applied to the rest of the important GBR parameters mentioned above.
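The Monte Carlo search described above can be sketched as follows: for each candidate number of estimators, the model is refit on many random train/test splits and the dispersion of the resulting scores is collected. The data, input dimension, and candidate values below are synthetic placeholders; in the study, each response variable was analyzed separately on the simulation dataset.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 4))                          # hypothetical process inputs
y = X[:, 0] ** 2 + X[:, 1] + rng.normal(0, 0.05, 200)   # hypothetical response

scores = {}  # n_estimators -> list of test R^2 scores, one per random split
for n in (10, 25, 50, 100, 200):
    scores[n] = []
    for seed in range(20):                              # Monte Carlo repetitions
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, random_state=seed)
        model = GradientBoostingRegressor(n_estimators=n).fit(X_tr, y_tr)
        scores[n].append(model.score(X_te, y_te))
```

The per-value score lists can then be compared as boxplots (cf. Figure 14) to pick the number of estimators with the best score and lowest dispersion.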

Results
This section summarizes the values predicted by the metamodel developed for each type of prediction (misrun, shrinkage, etc.) and for each part. The performance of the model was evaluated by comparing the predicted values with the reference values (those predicted by the physics-based models) using the R² score and the NMAE, defined below. Table 4 compiles these values for each type of result, averaged over the different parts. The R² score, calculated following (6), is the coefficient of determination, commonly used when comparing predictive models. It measures the proportion of the variation in the target or response variable explained by the model, the best possible score being one.
The mean absolute error (MAE) is a measure of the prediction error. In this case, its normalized version (NMAE), calculated following (5), is presented so as to provide a scale-independent comparison criterion.
In both cases, N is the number of predictions, y_i the real measurements, ŷ_i the predictions of the model for each i, 1 ≤ i ≤ N, σ_y the standard deviation of the variable y, and Δ the range of the measured data.
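With those symbols, the two scores can be computed as follows; these are the standard definitions (the NMAE normalization follows the text: the mean absolute error divided by the range Δ of the measured data), since equations (5) and (6) themselves are not reproduced here:

```python
import numpy as np

def r2_score(y, y_hat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    ss_res = np.sum((y - y_hat) ** 2)          # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)     # total sum of squares
    return 1.0 - ss_res / ss_tot

def nmae(y, y_hat):
    """Mean absolute error normalized by the range of the measurements."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    delta = np.max(y) - np.min(y)              # range of the measured data
    return np.mean(np.abs(y - y_hat)) / delta
```

A perfect model yields r2_score = 1 and nmae = 0; the NMAE values quoted below (e.g., 4% for the SDAS) correspond to nmae values of 0.04.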
The following figures show scatter plots of the predictions versus the measured data points, together with the linear regression line and its 95% confidence interval, for the best result of each case. A perfect prediction would fall on the diagonal, so the closeness of the regression line to the diagonal indicates the quality of each model (Figures 15-17).

Discussion of the Results
The predictions made by the metamodels for the misrun and shrinkage results were considered very good, with an average R² higher than 0.7 and an average NMAE of 7%, as can also be observed in Figure 15. The misrun risk and the shrinkage porosity are usually the main values of interest for the foundryman; in fact, at the industrial level, predictions are often limited to these two values. From this point of view, therefore, the good prediction obtained for these values is satisfactory.
The predictions obtained for the SDAS were the best results in this study, with an average NMAE of 4% and R² of 0.94, which are very good results in terms of reliability (see also Figure 16). This may be due to the large influence of the mold temperature, as can be seen in Figure 13. The SDAS value has a significant impact on the mechanical properties of the cast products (yield strength, ultimate tensile strength, and elongation) and also on their corrosion resistance. Therefore, the good prediction of the SDAS is also very valuable.
The predictions obtained for microporosity were not as good; this is clear from the R², although in terms of the NMAE (9%), the predictions showed moderate reliability. The rest of the predictions (macroporosity, eutectic, grain density) were worse, with average NMAE values clearly above 10% and R² around 0.5 or less. The reasons for these results are difficult to determine. One hypothesis is the strong nonlinearity introduced in the simulations by the complexity of the physical phenomena, which the model was not able to capture. In fact, microporosity and macroporosity, the eutectic percentage, and the grain density are aspects much less studied by simulation codes than the misrun or the shrinkage. It would be interesting to improve the performance of the metamodels for these response variables in future work, although these types of results are generally simulated for scientific purposes rather than at the industrial level. Given the complexity of predicting some response variables, more data would be needed to establish a reliable relationship with the input parameters. In some of the regression models, additional variables such as certain material properties could also be taken into account.
Compared with other alternatives, the approach taken in this work was very competitive [10]. It must be taken into account that, as mentioned in Section 1, few works build metamodels for the HPDC process, and the models they generate are difficult to compare.
Finally, another important aspect besides reliability is the computational time, especially compared with FEM simulations. The predictions required only milliseconds of CPU time; therefore, the metamodels can be applied to optimization activities or online prediction systems.
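The millisecond-scale inference cost can be illustrated by timing the prediction step of a fitted GBR; the data, input dimension, and estimator count below are placeholders, not the study's actual models:

```python
import time
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 6))                      # hypothetical process inputs
y = X @ rng.uniform(size=6) + rng.normal(0, 0.05, 500)

# Fitting is done once, offline; only prediction runs in the online system.
model = GradientBoostingRegressor(n_estimators=25).fit(X, y)

start = time.perf_counter()
model.predict(X[:1])                                # single-part prediction
elapsed_ms = (time.perf_counter() - start) * 1e3
print(f"prediction time: {elapsed_ms:.2f} ms")
```

Prediction only traverses the fitted trees, which is why the cost stays orders of magnitude below a physics-based FEM run of the same casting.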

Conclusions
Different metamodels of the HPDC process of aluminum parts were successfully developed based on a gradient boosting regressor. The best results were reached for the prediction of the SDAS (NMAE = 4%, R² = 0.94), a microstructure parameter closely related to the mechanical properties of the cast products.
The metamodels predicted with good precision (NMAE = 7%, R² = 0.7) the results of greatest interest for the metal casting industry: the misrun risk and the shrinkage level of the parts.
The rest of the predictions (microporosity and macroporosity, eutectic percentage, and grain density) were less precise.
The main highlight of the developed metamodels is their ability to reduce the time needed to obtain the predictions without an important loss of accuracy compared with a more conventional FEM simulation of the HPDC process. This implies that the metamodel approximation may be a good solution for performing real-time predictions in this manufacturing process.