Identification of Sheet Metal Constitutive Parameters Using Metamodeling of the Biaxial Tensile Test on a Cruciform Specimen

Abstract: An identification strategy based on a machine learning approach is proposed to identify the constitutive parameters of metal sheets. The main novelty lies in the use of Gaussian Process Regression to identify the constitutive parameters of metal sheets from biaxial tensile test results on a cruciform specimen. The metamodel is intended to identify the constitutive parameters of the work-hardening law and yield criterion. The metamodel uses as input data the forces along both arms of the cruciform specimen and the strains measured at a given set of points. The identification strategy was tested for a wide range of virtual materials, and it was concluded that the strategy is able to identify the constitutive parameters with a relative error below 1%. Afterwards, an uncertainty analysis is conducted by introducing noise into the force and strain measurements. In the presence of noise, the optimal strategy is able to identify the constitutive parameters with errors below 6% in the description of the hardening, anisotropy coefficients and yield stresses. The study emphasizes that the main strength of the proposed strategy lies in the judicious selection of critical areas for strain measurement, thereby increasing the accuracy and reliability of the identification process.


Introduction
Sheet metal forming processes are broadly used in the automotive, aerospace and metalworking industries. In these industries, the production costs and the quality of the end product are key aspects of being competitive [1]. The development and optimization of sheet metal forming processes usually resort to the use of finite element analysis (FEA) [2,3]. In FEA, the mechanical behaviour of metallic materials is described using mathematical models, known as constitutive laws. It is commonly assumed that the plastic behaviour of sheet metals is described by an orthotropic yield criterion [4], which represents the yield surface in the stress space, and by hardening laws that express the yield surface evolution during plastic deformation. Identifying the constitutive parameters is therefore a fundamental step in correctly modelling the mechanical behaviour of the material and, thus, in the development and optimization of sheet metal forming processes.
Constitutive parameters are commonly identified by using conventional tests, such as the uniaxial tensile test, the shear test [5][6][7] and the hydraulic bulge test [8][9][10]. These tests are characterized by homogeneous stress/strain fields, which allow the constitutive parameters to be analytically identified [11]. However, the absence of heterogeneity in the stress/strain states dictates the use of multiple mechanical tests to fully characterize the mechanical behaviour of the material [12,13]. To address this issue, several researchers have developed heterogeneous tests to characterize the material's behaviour [14][15][16][17][18][19]. In [14], the authors designed a virtual heterogeneous test to characterize the mechanical behaviour of thin sheets using finite element simulations with an anisotropic yield criterion. The design process involved optimizing the specimen shape and boundary conditions based on a quantitative indicator, ultimately resulting in a heterogeneous test with a butterfly shape and strain states ranging from simple shear to plane-strain tension. The identified material parameters were validated by comparison with an experimental database of quasi-homogeneous classical tests, demonstrating the reliability of the proposed approach. In contrast, in [15], the authors focused on identifying the parameters of the Swift hardening law for a dual-phase steel using an optimally designed notched specimen and the Finite Element Model Updating (FEMU) technique. This involved an iterative comparison between DIC-measured material data and numerically simulated results.
The biaxial tensile test on specimens with a cruciform geometry has been the subject of growing interest [20][21][22][23] due to its potential to identify the constitutive parameters using a single test. For instance, in [19], an inverse analysis methodology for determining plastic constitutive model parameters in biaxial tensile tests of metal sheets was implemented through the use of finite element simulations and comparison with experimental data. It efficiently identified yield criterion and work-hardening law parameters, providing a precise alternative to time-consuming and uncertain traditional strategies. Biaxial tensile tests on specimens specifically designed to obtain heterogeneous stress/strain fields and a high sensitivity to the anisotropy of the material have been successfully applied in inverse analysis methods [12,[24][25][26][27]. However, the application of inverse analysis methods is typically associated with significant computational and time costs, which are necessary to compute the sensitivity matrix through numerical simulations of the mechanical test used [28].
Metamodeling is an alternative approach based on the construction of mathematical models that make it possible to simulate the behaviour of complex and expensive systems. In this sense, metamodeling can be used to identify constitutive parameters by establishing complex mathematical relationships between them and the results of mechanical tests [29,30]. Establishing these relationships requires a large set of mechanical test data, which can be obtained, for example, through FEA. There are currently multiple metamodeling techniques, whose results and computational costs vary [31]. Although machine learning techniques generally demand substantial datasets for training, once trained, the parameter identification is straightforward and does not require additional numerical simulations. Several metamodeling techniques have been used to identify material parameters using homogeneous tests (e.g., uniaxial tensile [32], bending [33], and bulge [34]). These works focused on the use of neural networks [33][34][35] and kriging [32] to build the metamodels. Although neural networks are commonly used to identify material parameters [36][37][38], in recent years Gaussian Process Regression (GPR) has emerged as a powerful tool in machine learning, particularly in regression tasks where uncertainty estimation is critical. Gaussian Process Regression is a non-parametric and flexible approach that provides a framework capable of dealing with complex and nonlinear relationships in data [39]. Nevertheless, to the best of our knowledge, the use of Gaussian Processes (GP) to identify parameters in metal sheets is unexplored in the literature. Therefore, this work aims to explore the application of Gaussian Process Regression to promptly identify the constitutive parameters from biaxial tests, without the need for additional numerical simulations (i.e., after training), thus reducing the computational cost. Three different strategy designs are proposed, each using different datasets (different zones of strain measurement). This diversity of data allows an examination of the impact of the appropriate choice of the strain-measuring regions. A noise analysis is then performed to simulate the inevitable variability of experimental measurements and test the robustness of the different strategy designs.

Numerical Model
Different geometries of the specimen used in the biaxial tensile test have been developed in order to obtain an optimized geometry adaptable to different purposes [17,19,40]. For example, in [40], using a parametric finite element model, the shape of the cruciform specimen for biaxial loading was optimized, demonstrating improved performance by mitigating premature failure and strain field heterogeneities. In this work, the geometry proposed in [19] is used. This geometry was numerically optimized to maximize the sensitivity of the test results to the constitutive parameters. The specimen used has a thickness of 0.5 mm and the rolling direction is aligned with the 0x axis. The remaining dimensions are shown in Figure 1a. The biaxial test is performed by imposing the same displacement along the 0x and 0y axes. The displacements on the 0x and 0y axes are evaluated at points A and B, respectively. The biaxial test is performed until a displacement of 2 mm is reached.
In the numerical simulation, only one eighth of the specimen is considered, due to the symmetry conditions of the test geometry and the orthotropy of the sheet. The numerical model was discretized with 6680 8-node hexahedral solid finite elements (as shown in Figure 1b; each element is approximately 0.52 mm in length), combined with a selective reduced integration technique. The simulations were performed with the DD3IMP in-house finite element code, which uses an updated Lagrangian scheme to integrate the constitutive law in an implicit way [41][42][43]. The numerical simulations were performed with an Intel® Core™ i9-7900X Deca-Core processor (4.3 GHz). On average, each numerical simulation was carried out in 53 s.

The elastoplastic constitutive model used to describe the mechanical behaviour of the metal sheet assumes an isotropic elastic behaviour defined by the generalized Hooke's law, and a plastic behaviour described by the Hill'48 yield criterion [6] and the Swift work-hardening law [45]. The Hill'48 yield criterion is written as follows:

$F(\sigma_{yy}-\sigma_{zz})^2 + G(\sigma_{zz}-\sigma_{xx})^2 + H(\sigma_{xx}-\sigma_{yy})^2 + 2L\tau_{yz}^2 + 2M\tau_{xz}^2 + 2N\tau_{xy}^2 = Y^2$

where Y is the yield stress; F, G, H, L, M and N are the parameters that define the shape of the yield surface; and σ_xx, σ_yy, σ_zz, τ_xy, τ_xz, τ_yz are the components of the Cauchy stress tensor, written in the orthotropic coordinate system 0xyz. In this work, it is assumed that L = M = 1.5 (identical to von Mises) and that G + H = 1, meaning that the yield stress, Y, is comparable to the uniaxial tensile stress aligned with the rolling direction.
The anisotropy coefficients for 0°, 45° and 90° (relative to the rolling direction), respectively denoted by r_0, r_45 and r_90, can be determined from the following relations:

$r_{0} = \dfrac{H}{G}, \qquad r_{45} = \dfrac{N}{F+G} - \dfrac{1}{2}, \qquad r_{90} = \dfrac{H}{F}$

The evolution of the yield stress, Y, with plastic deformation is described by the Swift hardening law:

$Y = C\left(\varepsilon_{0} + \bar{\varepsilon}^{\,p}\right)^{n}$ (5)

where C, ε_0 and n are material constants, and $\bar{\varepsilon}^{\,p}$ represents the equivalent plastic strain. The initial yield stress, Y_0, is given by:

$Y_{0} = C\,\varepsilon_{0}^{\,n}$
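Given the assumptions above (L = M = 1.5 and G + H = 1), the anisotropy coefficients and the Swift law can be evaluated directly from the constitutive parameters. A minimal sketch of these relations (function names are illustrative, not from the paper):

```python
def anisotropy_coefficients(F, G, H, N):
    """Anisotropy coefficients predicted by the Hill'48 criterion:
    r_0 = H/G, r_45 = N/(F+G) - 1/2, r_90 = H/F."""
    return H / G, N / (F + G) - 0.5, H / F

def swift_stress(C, eps0, n, eps_p):
    """Swift hardening law: Y = C * (eps0 + eps_p)**n."""
    return C * (eps0 + eps_p) ** n

# With G + H = 1, the initial yield stress is
# Y0 = swift_stress(C, eps0, n, 0.0), i.e., Y0 = C * eps0**n.
```

For example, F = G = H = 0.5 with N = 1.5 recovers the isotropic (von Mises) case, where r_0 = r_45 = r_90 = 1.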

Identification Strategy
The proposed identification strategy is based on a machine learning approach in which one or more metamodels are built to establish a relationship between the results of the biaxial tensile test (inputs) and the constitutive parameters of metal sheets (outputs). The input data consist of the strains (ε_xx, ε_yy, ε_zz and ε_xy) and the forces in both cruciform arms, along 0x and 0y (see Figure 1). These values are measured every 0.2 mm, up to a displacement of 2 mm, resulting in a maximum of 10 force values along 0x, 10 force values along 0y and 40 strain values per node. The output data of the metamodel(s) are the parameters of Swift's law, Y_0, C and n, and the parameters of the Hill'48 yield criterion, F, G and N.
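The size of the resulting input vector follows directly from these counts (4 strain components × 10 increments per node, plus the two arm forces per increment). A small, purely illustrative helper:

```python
def input_dimension(n_nodes, n_increments=10, n_strain_components=4):
    """Size of the metamodel input vector: strains at every node for
    every displacement increment, plus the two arm forces (Fx, Fy)
    at each increment."""
    strains = n_nodes * n_strain_components * n_increments
    forces = 2 * n_increments
    return strains + forces

# 16 nodes  -> 660 inputs per metamodel
# 96 nodes  -> 3860 inputs
# 105 nodes -> 4220 inputs
```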

Strategy Design
In the initial phase of the analysis, three different approaches to the identification of constitutive parameters were considered. The first, referred to as the "Multiple Datasets" design, involves the use of six different metamodels, each specifically dedicated to the identification of one constitutive parameter. Within these metamodels, strains are measured at 16 strategically selected nodes, whose position is tailored to the parameter under consideration. The node selection process is underpinned by a variance-based sensitivity analysis, specifically Sobol indices, previously performed to assess the effect of material parameters on the biaxial test results using a cruciform specimen [44]. In this sensitivity analysis, Sobol indices were evaluated for all nodes in the numerical simulation, allowing the identification of the regions within the cruciform specimen where strain values are most influenced by specific constitutive parameters. Consequently, for each parameter, the set of 16 nodes with the highest Sobol indices was selected for strain evaluation. To provide a visual representation of this selection process, Figure 2 shows the spatial distribution of the 16 most sensitive strain nodes for each constitutive parameter. The precision of the node selection guided by the sensitivity analysis increases the reliability and accuracy of the subsequent metamodels in capturing the intricate relationships between material parameters and resulting strains.
In the second design, referred to as "Unique Dataset", one single metamodel is employed to simultaneously identify all six constitutive parameters. To maintain impartiality with respect to the first strategy design (same number of nodes per parameter), the strains are measured at 96 nodes to ensure a similar input dimension/information. The 96 nodes include the 16 most sensitive nodes for each constitutive parameter, evaluated with the Sobol indices, as in the first design. Figure 3 shows their spatial distribution (the union of all areas in Figure 2).
The third strategy design, referred to as "Uniform Dataset", also uses a single metamodel to identify all six constitutive parameters, as in the "Unique Dataset" design. However, the strains are measured at 105 evenly spaced nodes across the cruciform specimen. This distribution, shown in Figure 4, has 25 nodes in the central area of the specimen, with 40 nodes dispersed in each arm. The number of nodes is similar to the preceding strategy designs, in order to maintain impartiality. This type of distribution aims to assess whether it is important, for the performance of the metamodel, to previously select the nodes where the influence of the parameters on the strain results is more significant.


Machine Learning Technique
Gaussian Process Regression (GPR) is utilized to build the aforementioned metamodels. A GPR can be described as a set of any finite number of random variables that follow a Gaussian distribution and are fully described by a mean function and a covariance function [46]. Typically, the mean function is assumed to be zero to simplify the notation, so the covariance function is sufficient to define the GPR [47]. The GPR metamodel can be defined as:

$y(x) = f(x) + \epsilon$

where y(x) is the observed output, f(x) is the Gaussian Process variable and ϵ represents zero-mean Gaussian white noise. If $y(x_t)$ is the vector of known results present in the dataset and $y(x_p)$ is the vector of results to be predicted, the joint normal probability distribution is given by:

$\begin{bmatrix} y(x_t) \\ y(x_p) \end{bmatrix} \sim \mathcal{N}\!\left(0,\; \begin{bmatrix} K(X,X) + \varsigma_{\epsilon}^{2} I & K(X,X_*) \\ K(X_*,X) & K(X_*,X_*) \end{bmatrix}\right)$

with $\varsigma_{\epsilon}^{2}$ being the noise variance, I being the identity matrix and each K being a covariance matrix evaluated for all considered points, with X representing the training data from the dataset, and X_* the unseen data for which the metamodel will make predictions. Finally, the GPR predictions are given by the following equations:

$\bar{f}_* = K(X_*,X)\left[K(X,X) + \varsigma_{\epsilon}^{2} I\right]^{-1} y$

$\operatorname{cov}(f_*) = K(X_*,X_*) - K(X_*,X)\left[K(X,X) + \varsigma_{\epsilon}^{2} I\right]^{-1} K(X,X_*)$

where $\bar{f}_*$ is the vector of predictions (mean), and $\operatorname{cov}(f_*)$ represents the covariance of the metamodel outputs, which acts as a measure of prediction uncertainty. For the metamodel's training process, it is necessary to define a kernel and an optimizer. The Gaussian Process Regression was implemented in Python (version 3.11) using the open-source GPy package [48].
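The prediction equations above can be sketched in plain NumPy. The paper itself uses the GPy package; this standalone version, with an RBF kernel and illustrative function names, only illustrates how the posterior mean and covariance are computed:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gpr_predict(X, y, X_star, noise_var=1e-6, **kern):
    """Posterior mean and covariance of a zero-mean GP with noisy
    observations, following the standard GPR prediction equations."""
    K = rbf_kernel(X, X, **kern) + noise_var * np.eye(len(X))
    K_s = rbf_kernel(X_star, X, **kern)
    K_ss = rbf_kernel(X_star, X_star, **kern)
    mean = K_s @ np.linalg.solve(K, y)
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    return mean, cov
```

With a very small noise variance, predicting at the training points reproduces the training targets almost exactly, and the posterior variance there collapses towards zero, which is the uncertainty-quantification property the text highlights.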

Dataset Generation
The generation of the dataset is a crucial stage in acquiring the necessary data for training and testing the metamodels. The dataset comprises a collection of numerical simulation results of the biaxial tensile test on a cruciform specimen. A maximum of 4096 simulations were used for training, while 500 simulations were used for testing the metamodels. The numerical simulations of the biaxial test were carried out for fictitious materials, whose constitutive parameters' upper and lower limits are given in Table 1. These limits were used because they represent a wide range of materials. For a clear meaning of the used constitutive parameters' limits, Table 2 shows the limits of the anisotropy coefficients, r_0, r_45 and r_90, and the yield stresses, for an angle of 0°, 45° and 90° with the rolling direction, respectively. Additionally, Figure 6 represents the hardening limits of the considered fictitious materials, from which it can be concluded that a wide range of hardening behaviours was considered in this work. The materials were generated by varying the constitutive parameters according to a Sobol sequence [49], which provides a more uniform representation of all possible parameter combinations, compared to a random sequence.
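A Sobol-sequence sampling of the parameter space can be sketched with SciPy's quasi-Monte Carlo module. The bounds below are placeholders for illustration only, not the values of Table 1:

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical bounds for [F, G, N, Y0, C, n] -- placeholders, not Table 1.
lower = np.array([0.2, 0.2, 0.8, 100.0, 300.0, 0.05])
upper = np.array([0.8, 0.8, 2.5, 300.0, 900.0, 0.35])

# A Sobol sequence fills the 6-dimensional unit cube more evenly
# than pseudo-random sampling; 2**10 = 1024 fictitious materials.
sampler = qmc.Sobol(d=6, scramble=False)
unit = sampler.random_base2(m=10)
materials = qmc.scale(unit, lower, upper)  # map [0, 1]^6 to the bounds
```

Each row of `materials` is then one set of constitutive parameters for which a biaxial-test simulation is run.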


Performance Metric
To evaluate the training of the metamodels presented, the training error is calculated using the following expression:

$e_{\mathrm{train}} = \dfrac{100}{q\,h} \sum_{i=1}^{q} \sum_{v=1}^{h} \dfrac{\left| y^{p}_{iv} - y^{r}_{iv} \right|}{y^{r}_{iv}}$ (11)

where q is the number of splits of the cross-validation scheme, h is the total number of validation samples, and $y^{p}_{iv}$ and $y^{r}_{iv}$ are, respectively, the value predicted by the metamodel and the real value of the parameter for the validation sample v of the cross-validation split i.
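Taking the training error as the mean relative error over the q splits and h validation samples defined above, it can be computed in a few lines (function name illustrative):

```python
import numpy as np

def training_error(pred, real):
    """Mean relative error, in percent, over q cross-validation splits,
    each with h validation samples (pred and real have shape (q, h))."""
    pred, real = np.asarray(pred, float), np.asarray(real, float)
    return np.mean(np.abs(pred - real) / np.abs(real)) * 100.0
```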
To evaluate the predictions of each of the metamodels presented, the values of the predictions obtained by the metamodel are compared with the values of the test dataset. The average relative error is calculated using the expression:

$\bar{e} = \dfrac{100}{m} \sum_{t=1}^{m} \dfrac{\left| y^{p}_{t} - y^{r}_{t} \right|}{y^{r}_{t}}$ (12)

where $y^{p}_{t}$ and $y^{r}_{t}$ are, respectively, the value predicted by the metamodel and the real value of the parameter to be identified for the test sample t, and m is the total number of test samples.
For evaluating the hardening and anisotropy predictions (yield stresses and anisotropy coefficients as a function of the angle from the rolling direction), new error metrics were defined. The error in the r-values is given by:

$e_{r} = \dfrac{100}{m\,p} \sum_{t=1}^{m} \sum_{d} \dfrac{\left| r^{p}_{dt} - r^{r}_{dt} \right|}{r^{r}_{dt}}$ (13)

where p is the number of directions with the rolling direction (p = 7); d is the direction (d = 0°, 15°, 30°, 45°, 60°, 75°, 90°); $r^{r}_{dt}$ and $r^{p}_{dt}$ are, respectively, the real anisotropy coefficients and the ones obtained with the constitutive parameters predicted by the metamodel for the test sample t; and m is the total number of test samples. The same metric is applied to the yield stresses:

$e_{\sigma} = \dfrac{100}{m\,p} \sum_{t=1}^{m} \sum_{d} \dfrac{\left| \sigma^{p}_{dt} - \sigma^{r}_{dt} \right|}{\sigma^{r}_{dt}}$ (14)

where $\sigma^{r}_{dt}$ and $\sigma^{p}_{dt}$ are, respectively, the real yield stresses and those obtained with the constitutive parameters predicted by the metamodel for the test sample t. The yield stress values are computed from the Hill'48 yield criterion. For the hardening, the error is evaluated as follows:

$e_{Y} = \dfrac{100}{m\,w} \sum_{t=1}^{m} \sum_{l=1}^{w} \dfrac{\left| Y^{p}_{t}(\bar{\varepsilon}^{\,p}_{l}) - Y^{r}_{t}(\bar{\varepsilon}^{\,p}_{l}) \right|}{Y^{r}_{t}(\bar{\varepsilon}^{\,p}_{l})}$ (15)

where w is the number of points used to describe the hardening curve; l is an instant of equivalent plastic strain; $Y^{r}_{t}(\bar{\varepsilon}^{\,p}_{l})$ and $Y^{p}_{t}(\bar{\varepsilon}^{\,p}_{l})$ are, respectively, the real equivalent stresses and the ones obtained with the constitutive parameters predicted by the metamodel for the test sample t; and m is the total number of test samples. A total of w = 20 points uniformly distributed (increments of 0.01) along the hardening curve were used. The equivalent stresses are computed from the Swift law (Equation (5)).
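The hardening error metric, for instance, can be evaluated directly from the Swift law; a sketch assuming w = 20 increments of 0.01, as stated above (function names are illustrative):

```python
import numpy as np

def swift(C, eps0, n, eps_p):
    """Swift hardening law: Y = C * (eps0 + eps_p)**n."""
    return C * (eps0 + eps_p) ** n

def hardening_error(params_real, params_pred, w=20, step=0.01):
    """Mean relative error, in percent, between real and predicted
    hardening curves, evaluated at w equally spaced plastic strains."""
    eps = np.arange(1, w + 1) * step
    errors = []
    for (Cr, e0r, nr), (Cp, e0p, npred) in zip(params_real, params_pred):
        Yr = swift(Cr, e0r, nr, eps)
        Yp = swift(Cp, e0p, npred, eps)
        errors.append(np.mean(np.abs(Yp - Yr) / Yr))
    return 100.0 * np.mean(errors)
```

Identical parameter sets give a zero error, while a 10% overshoot in C inflates the whole curve by 10% and yields exactly that error.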

Strategy Results
In this section, the identification results of the three strategy designs are presented. Firstly, the performance of the kernels and optimizers used in the Gaussian Process Regression to train the metamodels is explored. Then, the metamodels are tested in order to evaluate their identification performance.
To evaluate the best kernel and optimizer to be used within the GPR, each metamodel was trained with 1024 numerical simulations of the cruciform test. The training dataset was split into two subsets, one with 70% of the data, for calibration, and the other with the remaining 30%, for validation. The 70/30 division was performed 30 times in a cross-validation scheme. For each split, the algorithm trains a metamodel with the calibration set, then makes predictions for the validation set, and the training error is calculated (Equation (11)). Three kernels (Radial Basis Function "RBF", Matérn (3/2), and Matérn (5/2)) and three optimizers (Truncated Newton "TNC", Scaled Conjugate Gradient "SCG", and Limited-Memory Broyden-Fletcher-Goldfarb-Shanno "LBFGS") were studied. In total, nine kernel/optimizer combinations were evaluated.
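The repeated 70/30 split can be sketched as follows (the metamodel training and error computation are elided):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility
n_samples, n_splits = 1024, 30

for i in range(n_splits):
    perm = rng.permutation(n_samples)
    n_cal = int(0.7 * n_samples)
    cal_idx, val_idx = perm[:n_cal], perm[n_cal:]
    # ...train the metamodel on cal_idx, predict val_idx,
    # and accumulate the training error (Equation (11))...
```

Each iteration draws a fresh random partition, so the 30 training errors together reflect the variability of the calibration/validation split rather than a single arbitrary division.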
The errors resulting from the training of the metamodel for each parameter and the nine kernel/optimizer combinations are indicated in Figure 7a-c for the strategy designs "Multiple Datasets", "Unique Dataset" and "Uniform Dataset", respectively. The errors shown in Figure 7 demonstrate that the trained metamodels are able to accurately predict the constitutive parameters G, N, Y_0 and C of the validation dataset, but the parameters n and F present higher errors, independently of the strategy design. From this figure, it can also be concluded that the performance is similar for all the kernel/optimizer combinations, although the best predictions were, in general, obtained with the Radial Basis Function (RBF) kernel and the Scaled Conjugate Gradient (SCG) optimizer. Therefore, this combination was employed for the remainder of this work.
The performance of the metamodels was analysed for training datasets of distinct sizes, via the use of 256, 512, 1024, 2048 and 4096 simulations. The metamodels' performance is evaluated with unseen data (test dataset), composed of 500 numerical simulations of the cruciform test. Figure 8 shows the error obtained by the three strategy designs using the various training dataset sizes. Based on Figure 8, it is possible to conclude that there is an exponential decrease in the identification error as the size of the training dataset increases. From 1024 simulations onwards, in general, the strategies show a stabilization of the error value (less than 1%) for all parameters. For some of the parameters, errors of less than 1% are obtained for very small samples of 256 (parameter G) and 512 (parameters Y_0 and N) simulations, regardless of the strategy design. Overall, the results are similar between the various strategy designs, indicating that any approach can lead to acceptable identifications.
Figure 9 shows the time taken to train and test the metamodels for the three strategy designs. As expected, the elapsed time increases as the training dataset grows. The time difference between the "Multiple Datasets" and the "Unique Dataset" designs is small. On the other hand, the "Uniform Dataset" design generally has shorter training and testing times than the others, despite having a larger number of data points, which could be expected to make the problem more complex. This may be due to a faster convergence of the metamodel to a solution, owing to the additional information in the inputs.

Noise Analysis
Noise Analysis
In this section, the robustness of the identification strategy to virtual noise in the strain and force results is evaluated. The objective of this virtual noise is to simulate the noise that is inevitable in experimental results of the cruciform test, in order to verify its influence on the identification performance of the strategy designs. For this, the metamodels previously trained without noise were used (using the combination "RBF" and "SCG"). Only the test datasets have noise; they are created from the original test dataset by adding the "virtual" noise through the following equation: X_noise = X(1 + η), where X_noise is the result with noise, X is the original result and η is calculated by a random function (random [−λ, λ]), where λ is the desired noise value.
Three levels of noise were examined: 0.01%, 0.5% and 1%. The metamodels were trained with a dataset of 1024 numerical simulations, allowing an error under 1% in the noise-free identification (Figure 8). Figure 10 shows the identification error obtained for the several strategy designs. It can be concluded that, for all parameters and strategy designs, the identification error increases with the noise level. The "Multiple Datasets" strategy design stands out as the one that varies the least as noise increases, while the "Uniform Dataset" design is the most affected by the increase in noise. The "Multiple Datasets" design keeps the testing error below 5% for all parameters except the parameters n and N. In particular, the parameters Y0, F and G always have an error of less than 1% for any noise level.
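The noise-injection step described above can be sketched as follows. This is a minimal NumPy sketch assuming multiplicative uniform noise, consistent with the percentage noise levels used in the study; the array contents and names are illustrative, not taken from the paper:

```python
import numpy as np

def add_virtual_noise(X, noise_level, rng=None):
    """Perturb each entry of X by a uniform multiplicative factor.

    Implements X_noise = X * (1 + eta), with eta drawn independently for
    every entry from the uniform interval [-noise_level, +noise_level].
    """
    rng = np.random.default_rng() if rng is None else rng
    eta = rng.uniform(-noise_level, noise_level, size=np.shape(X))
    return np.asarray(X) * (1.0 + eta)

# Example: perturb a (simulations x measurements) test dataset by 1% noise.
X_test = np.array([[100.0, 200.0], [150.0, 250.0]])  # e.g. forces/strains
X_noisy = add_virtual_noise(X_test, noise_level=0.01,
                            rng=np.random.default_rng(0))
```

Drawing a fresh η for every entry means forces and strains are perturbed independently, which mimics uncorrelated measurement noise.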
Despite the results obtained, evaluating the models developed in this research by calculating an error for each parameter individually may not be the most accurate approach, because anisotropy and hardening are very sensitive to the combination of parameters. In this sense, Figure 11 shows the errors obtained for the anisotropy coefficients (Equation (13)), yield stresses (Equation (14)) and hardening (Equation (15)). In this analysis, all the strategy designs vary in a very similar way as the noise value increases. Unlike the previous results, the "Unique Dataset" strategy is the one that presents the lowest error values considering the three metrics. The highest error value, of 6%, occurs for the hardening prediction. The "Multiple Datasets" strategy has a maximum error of 8% for the r values.
In summary, the presence of noise in the forces and strains (of the test dataset) significantly affects the quality of the identification. In this situation, the strategy design is particularly important, since noise influences the accuracy of the three designs differently. Although the individual parameter identification of the "Multiple Datasets" strategy may be better (Figures 8 and 10), the "Unique Dataset" strategy is the one that can predict the yield surface more accurately, since it takes into consideration the interactions between all parameters (Figure 11a,b). Therefore, the "Unique Dataset" strategy stands out as the most reliable approach.
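The grouped evaluation just discussed compares real and predicted quantities over all test samples. The exact forms of Equations (13)-(15) are not reproduced here; assuming a mean absolute relative error over all samples and directions, a grouped metric of this kind can be sketched as:

```python
import numpy as np

def mean_relative_error(real, pred):
    """Mean absolute relative error over all test samples and directions.

    `real` and `pred` are (m, k) arrays: m test samples, k values per
    sample (e.g. r-values or yield stresses at several directions).
    This is an assumed form of the grouped metrics, not the paper's exact
    Equations (13)-(15).
    """
    real = np.asarray(real, dtype=float)
    pred = np.asarray(pred, dtype=float)
    return np.mean(np.abs(pred - real) / np.abs(real))

# Example: r-values for m = 2 test materials at k = 3 directions.
r_real = np.array([[0.8, 1.0, 1.2], [0.9, 1.1, 1.3]])
r_pred = np.array([[0.84, 1.0, 1.2], [0.9, 1.1, 1.3]])
err = mean_relative_error(r_real, r_pred)  # 5% on one of six entries
```

A metric like this captures the combined effect of the parameters on the predicted anisotropy, which is why it can rank the strategies differently from the per-parameter errors.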

Envisaged Experimental Setup
Given the objective of this work, to formulate an identification strategy, the use of computer-generated results instead of experimental data provided a simple and efficient means of validating the proposed methodology. This approach ensured a well-defined behaviour of the tested fictitious materials without the typical errors associated with experimental measurements. In addition, the reliance on computer-generated results allows a direct comparison of the identification obtained with the proposed strategies and pseudo-"experimental" results, particularly with respect to the yield surface and work hardening functions, given our a priori knowledge of the parameter values of the fictitious materials. Obtaining such information in experimental scenarios requires alternative identification methods, such as classical approaches (e.g., tensile tests, shear tests), to identify the parameters of constitutive models that may inadequately capture the mechanical behaviour of the material. In summary, the use of fictitious materials allows a direct comparison of identification results while avoiding the challenges inherent in experimental procedures. Although a noise analysis has been performed to evaluate the influence of experimental noise on parameter identification, the results still need to be experimentally validated. In this section, the experimental implementation strategy is presented and explained.
The flowchart presented in Figure 12 illustrates the steps for an experimental implementation. Firstly, the proposed strategy envisages the measurement of the strains during the test through Digital Image Correlation (DIC). Considering the numerical results, the speckle pattern and the camera's resolution must ensure a measurement resolution of at least about 0.5 mm and guarantee an error in strains below 1%. The force values are envisaged to be measured with the use of load cells in both arms of the cruciform apparatus. During the test, it is also necessary to measure the displacement of both arms. After the measurement, an interpolation is necessary to obtain the forces for certain displacements, and the strains for the desired regions, according to the defined strategy, for the same displacements. Eventually, the regions that were defined as the most interesting (Figure 3 represents the regions of the strategy that was considered the most reliable) may prove unviable in an experimental scenario, and some of them might need to be changed. With such data, the code implementation is possible: the dataset is processed to create the metamodel, and the parameter values are obtained as output.
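The interpolation step can be sketched as follows, using linear interpolation with NumPy. The displacement-force values below are illustrative placeholders; in practice the measured pairs come from the load cells, and the grid is the set of displacements used to build the metamodel inputs. The same operation would be applied to the strains of each selected region:

```python
import numpy as np

# Measured samples along one arm of the cruciform test (assumed data).
displacement = np.array([0.0, 0.4, 0.9, 1.5, 2.1])   # mm
force = np.array([0.0, 2.1, 4.0, 5.5, 6.3])          # kN

# Common displacement grid shared by all tests, so that every test
# contributes force values at the same displacements.
grid = np.linspace(0.0, 2.0, 5)                      # mm

# Linear interpolation of the measured force onto the common grid.
force_on_grid = np.interp(grid, displacement, force)
```

Resampling every test onto the same grid is what makes the measurements comparable across tests and usable as fixed-length metamodel inputs.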

Conclusions
A strategy for identifying constitutive parameters of the hardening law and the anisotropy yield criterion is presented.The proposed approach uses Gaussian Process Regression to construct metamodels based on biaxial tensile tests performed on a cruciform specimen.Three different design strategies for parameter identification are presented: "Multiple Datasets", "Unique Dataset" and "Uniform Dataset".
The "Unique Dataset" and "Uniform Dataset" designs use a single metamodel to identify all six constitutive parameters simultaneously, while the "Multiple Datasets" design uses six different metamodels, each dedicated to isolating a single constitutive parameter.The study explores different kernels and optimizers within the Gaussian Process Regression framework to assess the identification quality of the metamodels.The investigation shows that different kernels and optimizers generally result in similar training errors.However, the Radial Basis Function (RBF) kernel and the Scaled Conjugate Gradient (SCG) optimizer consistently produce the lowest training errors.The performance of the three strategy designs is then evaluated and compared in scenarios with and without noise in the test results.In noiseless conditions, the three designs show comparable identification errors.Using training datasets with over 1024 simulations, constitutive parameters are identified with an error of less than 1%, regardless of the strategy design.However, when noise is introduced into the test results, significant differences between the strategies emerge.The "Unique Dataset" strategy emerges as the superior performer, with the smallest identification errors for the anisotropy coefficients and yield stresses, with errors not exceeding 6%.
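As an illustration of the metamodeling step, the following is a minimal Gaussian Process Regression sketch with a fixed RBF kernel. The paper tunes hyperparameters with optimizers such as SCG, which is omitted here; the training data, length scale and input dimensions are placeholders, not the paper's datasets:

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential (RBF) kernel matrix between row sets A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / length_scale**2)

class GPRegressor:
    """Minimal GP regression with a fixed RBF kernel (predictive mean only)."""

    def __init__(self, length_scale=1.0, noise=1e-8):
        self.length_scale, self.noise = length_scale, noise

    def fit(self, X, y):
        self.X = np.asarray(X, float)
        K = rbf_kernel(self.X, self.X, self.length_scale)
        K[np.diag_indices_from(K)] += self.noise   # jitter for stability
        self.alpha = np.linalg.solve(K, np.asarray(y, float))
        return self

    def predict(self, Xq):
        Ks = rbf_kernel(np.asarray(Xq, float), self.X, self.length_scale)
        return Ks @ self.alpha

# Toy stand-in: inputs would be the measured forces and strains of each
# simulation, outputs the constitutive parameters ("Unique Dataset" style,
# one metamodel predicting several outputs at once).
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(50, 3))
theta_train = np.column_stack([np.sin(X_train[:, 0]), X_train[:, 1] ** 2])
model = GPRegressor(length_scale=0.8).fit(X_train, theta_train)
theta_pred = model.predict(X_train[:5])
```

Solving one linear system per training set and reusing `alpha` for every query is what keeps identification fast once the metamodel is trained.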

These results highlight the importance of strategy selection, especially in the presence of noise. The "Unique Dataset" design proves to be the most effective and robust option for accurate parameter identification when considering the interactions between the parameters. It showed resilience to increasing noise levels, establishing itself as the most reliable strategy under such conditions.

Figure 2. Spatial distribution of the most sensitive strain nodes for the constitutive parameters: (a-f).

Figure 3. Spatial distribution of the strain nodes used in the strategy design "Unique Dataset".

Figure 4. Spatial distribution of the strain nodes used in the strategy design "Uniform Dataset".

Figure 5 shows a representation of every strategy design and allows a better understanding of the differences between them.

Figure 6. Hardening limits of the considered fictitious materials.

Metals 2024, 14, x FOR PEER REVIEW
Figure 7. Training errors for the strategy designs.

Figure 8. Evolution of the identification error with the increasing number of training simulations.

Figure 9. Time taken to train and test the metamodels for each strategy design.

Figure 10. Evolution of the identification error with increasing noise level.

Figure 11. Evolution of the identification error with increasing noise level for (a) anisotropy coefficients; (b) yield stresses; (c) hardening.

Table 2. Limits of anisotropy coefficients and yield stresses.

(…°, 30°, 45°, 60°, 75°, 90°); r^r_dt and r^p_dt are, respectively, the real anisotropy coefficients and those obtained with the constitutive parameters predicted by the metamodel for the test sample t; and m is the total number of test samples. The r values are computed from the Hill'48 yield criterion. The error in yield stresses is given by: