Article

A Methodology for the Statistical Calibration of Complex Constitutive Material Models: Application to Temperature-Dependent Elasto-Visco-Plastic Materials

by Juan Luis de Pablos 1,2, Edoardo Menga 3 and Ignacio Romero 1,2,*

1 IMDEA Materials Institute, Eric Kandel, 2, 28906 Getafe, Spain
2 Mechanical Eng. Department, Universidad Politécnica de Madrid, José Gutiérrez Abascal, 2, 28006 Madrid, Spain
3 AIRBUS Operations S.L., John Lennon S/N, 28906 Getafe, Spain
* Author to whom correspondence should be addressed.
Materials 2020, 13(19), 4402; https://doi.org/10.3390/ma13194402
Submission received: 18 August 2020 / Revised: 22 September 2020 / Accepted: 28 September 2020 / Published: 2 October 2020
(This article belongs to the Special Issue Empowering Materials Processing and Performance from Data and AI)

Abstract

The calibration of any sophisticated model, and in particular of a constitutive relation, is a complex problem that has a direct impact on the cost of generating experimental data and on the accuracy of the resulting predictions. In this work, we address this common situation using a two-stage procedure. In order to evaluate the sensitivity of the model to its parameters, the first step in our approach consists of formulating a meta-model and employing it to identify the most relevant parameters. In the second step, a Bayesian calibration is performed on the most influential parameters of the model in order to obtain an optimal mean value and its associated uncertainty. We claim that this strategy is very efficient for a wide range of applications and can guide the design of experiments, thus reducing test campaigns and computational costs. Moreover, the use of Gaussian processes together with Bayesian calibration effectively combines the information coming from experiments and numerical simulations. The framework described is applied to the calibration of three widely employed material constitutive relations for metals under high strain rates and temperatures, namely, the Johnson–Cook, Zerilli–Armstrong, and Arrhenius models.

1. Introduction

Modeling has become a very effective way to perform a first analysis of complex engineering problems. Most engineers, whether in academia or in industry, benefit from these techniques and consider them irreplaceable for their work.
Though the reliability of models keeps increasing, and with it the trust placed in their predictions, there is still a need to understand the intrinsic uncertainties that affect simulations, to estimate their effect on predictions, and to develop efficient methodologies to reduce them in a cost-effective manner. In this respect, an interesting and promising approach has emerged in recent years. It consists of employing advanced statistical methods not only to assess the uncertainty in a model but also to guide the experimental campaign that needs to be carried out to feed the parameter calibration. One of these tools is Global Sensitivity Analysis (GSA), a very useful strategy when it comes to analyzing the influence of all the parameters participating in a model. Improving on the local techniques introduced in the 1980s [1], GSA methods were proposed much later to account for the influence of parameters in an overall and rigorous fashion [2].
One important limitation of GSA techniques is that they require large amounts of simulated data as input. Numerical experiments obtained, for instance, through finite element (FE) simulations, demand a huge amount of computational resources that are often unavailable. A convenient remedy to this problem is to employ meta-models that provide reasonable approximations to the models’ response, but at a fraction of their computational cost. These types of models are built by sampling the original ones, as illustrated in Figure 1, and were originally proposed to optimize processes [3]. Initially known as Response Surface Models (RSM), they rapidly evolved and became emulators of computational codes at a very reduced cost. As a result, they have been utilized in a wide range of sensitivity analyses and applications [4,5,6].
There exist several families of meta-models. Some of the most commonly employed are the ones based on Kriging [7] and Radial Basis Functions (RBF) [8]. Both of them are generally accepted as good methods to efficiently capture trends associated with small data sets. Since they accurately adapt to available information, they must be re-calibrated when new inputs are provided [9]. The Bayesian approach, on the other hand, is a well-known technique that has been successfully employed in several scientific disciplines for parameter selection. See, for example, one of the very first generic applications, developed by Guttman [10], in which this inference procedure is already used to choose the best manufacturing parameters to make the widest possible population of fabricated items lie within the specified tolerance limits. More recent works have improved the Bayesian inference methodology (see, e.g., [11]).
Bayesian inference can be used systematically for the calibration of model parameters, taking into consideration the uncertainties due to the model itself, the experimental measurements, noise, etc. [12,13,14]. This approach has become relatively standard, providing not only optimized parameter values but also a complete Gaussian distribution for them.
In this work, we will combine GSA with Bayesian calibration because we believe that this combination is extremely powerful for understanding computer models and for drawing as much information as possible from experiments, be they numerical or physical. This mix of techniques is not new, and similar ones have been considered in the past. For example, sensitivity analysis and Bayesian calibration were employed together in [15] to assess multiple sources of uncertainty in waste disposal models by considering independent and composite scenarios, obtaining predictive output distributions using a Bayesian approach, and later performing a variance-based sensitivity analysis. In addition, the work in [16] proposes a procedure to evaluate the sensitivity of the parameters and a posterior calibration of the most important ones, applied to a model describing the chemical composition of water bodies.
In the current article, we explore the use of GSA and Bayesian calibration for complex constitutive models, an application that has not been previously considered for this type of analysis and that can greatly benefit from it. More precisely, we study three well-known constitutive material models suitable for metals subjected to extreme conditions, namely, the Johnson–Cook [17], Zerilli–Armstrong [18], and Arrhenius-type [19] models. These are fairly complex constitutive relations that depend on a relatively large number of material parameters that need to be adjusted for each specific material and test range. The actual implementations of the three can be found in the publicly available material library MUESLI [20], and we have used them together with standard explicit finite element calculations.
The remainder of the article is structured as follows. In Section 2, we outline the theoretical principles on which the statistical methods employed are based, as well as the three constitutive material models used in the study. In Section 3, we describe the application of the presented framework to the analysis of Taylor’s impact test [21,22], an experiment often used to characterize the elastoplastic behavior of metals under high strain rates. The results of our investigation are reported in Section 4, providing insights for the three constitutive models. Finally, Section 5 collects the main findings and conclusions of the study.

2. Fundamentals

2.1. Global Sensitivity Analysis

Global Sensitivity Analysis (GSA) refers to a collection of techniques that allow the identification of the most relevant variables in a model with respect to given Quantities of Interest (QoIs). They focus on apportioning the output’s uncertainty to the different sources of uncertain parameters [2] and define qualitative and quantitative mappings between the multi-dimensional space of stochastic input variables and the output. The most popular GSA techniques are based on the decomposition of the variance of the output’s probability distribution and allow the calculation of Sobol’s sensitivity indices.
According to Sobol’s decomposition theory, a function can be approximated as the sum of functions of increasing dimensionality that are orthogonal with respect to the standard $L^2$ inner product. Hence, given a mathematical model $y = f(\mathbf{x})$ with $n$ parameters arranged in the input vector $\mathbf{x}$, the decomposition can be expressed as:
$$ f(x_1, \ldots, x_n) = f_0 + \sum_{i=1}^{n} f_i(x_i) + \sum_{1 \le i < j \le n} f_{ij}(x_i, x_j) + \cdots + f_{1,2,\ldots,n}(x_1, \ldots, x_n), \qquad (1) $$
where $f_0$ is a constant and the remaining $f$ are functions with domains of increasing dimensionality. If we consider that $f$ is defined on random variables $X_i \sim U(0,1)$, $i = 1, \ldots, n$, then the model output is itself a random variable with variance
$$ D = \operatorname{Var}[f] = \int_{[0,1]^n} f^2(\mathbf{x}) \, \mathrm{d}\mathbf{x} - f_0^2. \qquad (2) $$
Integrating Equation (1) and using the orthogonality property of the functions $f$, we note that the variance itself can be decomposed into the sum
$$ D = \sum_{i=1}^{n} D_i + \sum_{1 \le i < j \le n} D_{ij} + \cdots + D_{1,2,\ldots,n}. \qquad (3) $$
This expression motivates the definition of the Sobol indices
$$ S_{i_1,\ldots,i_s} = \frac{D_{i_1,\ldots,i_s}}{D}, \qquad (4) $$
that trivially satisfy
$$ \sum_{i=1}^{n} S_i + \sum_{1 \le i < j \le n} S_{ij} + \cdots + S_{1,2,\ldots,n} = 1. \qquad (5) $$
This decomposition of the total variance $D$ reveals the main metrics employed to assess the relevance of each parameter in the scatter of the quantity of interest $f$. The relative variances $S_i$ are referred to as the first order indices or main effects and gauge the influence of the $i$-th parameter on the model’s output. The total effect or total order sensitivity associated with the $i$-th parameter, including its influence when combined with other parameters, is calculated as
$$ S_{T_i} = \frac{1}{D} \sum_{(i_1,\ldots,i_s) \in \mathcal{I}_i} D_{i_1,\ldots,i_s}, \qquad \mathcal{I}_i = \left\{ (i_1,\ldots,i_s) \, : \, \exists\, k,\ 1 \le k \le s,\ i_k = i \right\}. \qquad (6) $$
The widespread use of the main and total effects as sensitivity measures is due to the relative simplicity of the formulas and algorithms that can be employed to calculate or approximate them ([2], Chapter 4). Specifically, the number of simulations required to evaluate these measures is $N_s (n+2)$, where $N_s$ is the so-called base sample, a number that depends on the model complexity and varies from a few hundred to several thousand, and $n$ is, as before, the number of parameters of the model (we refer to ([2], Chapter 4) for details on these figures). To calculate sensitivity indices, it proves essential to first create a simple meta-model that approximates the true model: a limited set of runs then suffices to build a surrogate that, demanding far fewer computational resources, can be run a large number of times to complete the GSA.
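To make the cost estimate above concrete, the following minimal numpy sketch evaluates the main and total effects with the Saltelli/Jansen estimators; it is an illustration only, not the exact algorithm of [2], and the toy model and base sample size are assumptions made for the example.

```python
import numpy as np

def sobol_indices(model, n_params, n_base=4096, seed=0):
    """Main (S_i) and total (S_Ti) Sobol indices via the Saltelli/Jansen
    estimators; the total cost is N_s * (n + 2) model evaluations."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n_base, n_params))            # base sample
    B = rng.uniform(size=(n_base, n_params))            # second independent sample
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S, ST = np.empty(n_params), np.empty(n_params)
    for i in range(n_params):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                              # A with column i from B
        yABi = model(ABi)
        S[i] = np.mean(yB * (yABi - yA)) / var           # first-order (main) effect
        ST[i] = 0.5 * np.mean((yA - yABi) ** 2) / var    # total effect (Jansen)
    return S, ST

# Toy model whose three parameters have very different influence
f = lambda x: 10.0 * x[:, 0] + x[:, 1] ** 2 + 0.1 * x[:, 2]
S, ST = sobol_indices(f, n_params=3)
print(S.round(3), ST.round(3))
```

In the application of Section 3, the role of `model` is played by the RBF meta-model, which is cheap enough to be evaluated the required $N_s(n+2)$ times.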

2.2. Meta-Models

A meta-model is a model of another model; that is, a much-simplified version of a given model that nevertheless provides similar predictions to the original one for the same values of the parameters. In this work, we restrict our study to linear meta-models. To describe them, let us assume that the model we are trying to simplify depends on $N_p$ parameters, and let us denote the quantity of interest, assumed for simplicity to be a scalar, as $y$. For any collection of parameters $x \in \mathbb{R}^{N_p}$, we define $\hat{y}(x)$ to be the corresponding value of the quantity of interest and, slightly abusing the notation, we write
$$ \hat{y}(\mathbf{x}) = [\hat{y}(x_1), \hat{y}(x_2), \ldots, \hat{y}(x_N)], \qquad (7) $$
where $N$ is the number of samples and $\mathbf{x} = [x_1, x_2, \ldots, x_N]$ is an array of $N$ samples of the parameters. Then, a meta-model is a function $\hat{Y} : \mathbb{R}^{N_p} \to \mathbb{R}$ that approximates $\hat{y}$ and is of the form
$$ \hat{Y}(x) := \sum_{k=1}^{N_k} \eta_k \, h_k(x). \qquad (8) $$
In this equation, and later, $h_k$ are the kernels of the approximation and $\eta_k$ the corresponding weights. Slightly abusing the notation again, we express the relation between the model and its meta-model as
$$ \hat{y}(x) \approx \hat{Y}(x) = \sum_{k=1}^{N_k} \eta_k \, h_k(x), \qquad (9) $$
or, in compact form,
$$ \hat{y}(\mathbf{x}) \approx \hat{Y}(\mathbf{x}) := H(\mathbf{x}) \, \boldsymbol{\eta}, \qquad (10) $$
where H is the so-called kernel matrix that collects all the kernel functions.
The precise definition of a meta-model depends, hence, on the number and type of kernel functions $h_k$ and on the values of the regression coefficients $\eta_k$. Given an a priori choice for the kernels, the weights can be obtained from a set of model evaluations $\hat{y}(\mathbf{x})$ employing a least-squares minimization. Given, as before, an array of sample parameters $\mathbf{x}$ and their model evaluations $\hat{y}(\mathbf{x})$, the vector $\boldsymbol{\eta}$ can be calculated in closed form as the solution of the normal equations of the approximation. That is,
$$ \boldsymbol{\eta} = \left[ H(\mathbf{x})^{T} H(\mathbf{x}) \right]^{-1} H(\mathbf{x})^{T} \, \hat{y}(\mathbf{x}). \qquad (11) $$
As previously indicated, the kernel functions belong to a set that must be selected a priori. In the literature, several classes of kernel have been proposed for approximating purposes and in this work we select anisotropic Radial Basis Functions (RBF). This type of kernel has shown improved accuracy as compared with standard RBF, particularly when only a limited set of model evaluations is available [23].
A standard RBF is a map $K : \mathbb{R}^{N_p} \times \mathbb{R}^{N_p} \to \mathbb{R}$ of the form
$$ K(x, z) = k\big(r(x, z)\big), \qquad (12) $$
where $k : \mathbb{R} \to \mathbb{R}$ is a monotonically decreasing function and $r(x, z) := \| x - z \|$ is the Euclidean distance. The anisotropic radial kernels redefine the function $K$ to be of the form
$$ K(x, z) = \exp\left[ -\epsilon \sum_{i=1}^{N_d} \gamma_i^2 (x_i - z_i)^2 \right] = \exp\left[ -\epsilon \, (x - z)^{T} \Gamma \, (x - z) \right], \qquad (13) $$
where $\Gamma = \operatorname{diag}(\gamma_1^2, \ldots, \gamma_{N_d}^2)$ is a diagonal, positive definite matrix that anisotropically scales the contribution of each direction to the difference $x - z$, and $\epsilon > 0$ is the shape parameter of the kernel.
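As an illustration of Equations (8)–(13), the sketch below fits an anisotropic RBF meta-model by least squares. Taking the training samples as kernel centers and the specific values of $\gamma_i$ and $\epsilon$ are assumptions made only for this example; in practice these quantities would be tuned, for instance by cross-validation [23].

```python
import numpy as np

def aniso_rbf(X, Z, gamma, eps=1.0):
    """Anisotropic Gaussian kernel exp(-eps (x-z)^T diag(gamma^2) (x-z)), Eq. (13)."""
    d2 = (((X[:, None, :] - Z[None, :, :]) * gamma) ** 2).sum(axis=-1)
    return np.exp(-eps * d2)

def fit_meta_model(X_train, y_train, gamma, eps=1.0):
    """Weights eta from the least-squares normal equations, Eq. (11)."""
    H = aniso_rbf(X_train, X_train, gamma, eps)       # centers = training points
    eta, *_ = np.linalg.lstsq(H, y_train, rcond=None)
    return eta

def predict(X_new, X_train, gamma, eta, eps=1.0):
    """Meta-model prediction, Eq. (10): Y_hat(x) = H(x) eta."""
    return aniso_rbf(X_new, X_train, gamma, eps) @ eta

# Example: surrogate of a cheap 3-parameter "model" sampled at 200 points
rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 3))
y = np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2]
gamma = np.array([3.0, 1.0, 0.3])                     # assumed anisotropic scalings
eta = fit_meta_model(X, y, gamma)
residuals = predict(X, X, gamma, eta) - y             # small on the training set
```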

2.3. Bayesian Inference and Gaussian Processes

Bayesian inference is a mathematical technique used to improve the knowledge of probability distributions extracting information from sampled data [24]. It is successfully employed for a wide variety of applications in data science and applied sciences. For instance, it has been used in conjunction with techniques such as machine learning and deep learning in fields like medicine [25], robotics [26], earth sciences [27], and more. In this article, we employ Bayesian methods to find the optimal value of model parameters as a function of prior information and observed data. More specifically, we are concerned with model calibration and using it in combination with meta-models to obtain realistic parameter values for complex constitutive relations and their uncertainty quantification, with an affordable computational cost.
Some of the most robust techniques for calibration are based on non-parametric models for nonlinear regression [28]. Here, we will employ Gaussian processes to represent, in an abstract fashion, the response of a simulation code to a complex mechanical problem employing the material models that we set out to study. We summarize next the main concepts behind these processes.
A Gaussian process is a set of random variables such that any finite subset of them has a multivariate Gaussian distribution [28]. Such a process is completely defined by its mean and covariance, which are functions. If the set of random variables is indexed by points $x \in X \subseteq \mathbb{R}^d$, then, when the random variables have a scalar value $f(x)$, the standard notation employed is
$$ f(x) \sim \mathcal{GP}\big( m(x), \, k(x, x') \big), \qquad (14) $$
where $m : X \to \mathbb{R}$ and $k : X \times X \to \mathbb{R}$ are, respectively, the mean and the covariance. The mean function can be arbitrary, but the covariance must be a positive definite function. Often these two functions are given explicit expressions depending on hyperparameters. In simple cases, the mean is assumed to be zero, but often it is taken to be of the form
$$ m(x) = g(x)^{T} \boldsymbol{\beta}, \qquad (15) $$
where $g : \mathbb{R}^d \to \mathbb{R}^{g}$ is a vector of known basis functions and $\boldsymbol{\beta} \in \mathbb{R}^{g}$ is a vector of basis coefficients. The choice of the covariance function is the key aspect that determines the properties of the Gaussian process. It is often selected to be stationary, that is, depending only on a distance $d = \hat{d}(x, x')$. In particular, we will employ a covariance function of the form
$$ c(x, x') = \sigma^2 \, r(x, x'), \qquad (16) $$
where $\sigma^2$ is a variance hyperparameter and $r : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ has been chosen to be the Matérn $C^{5/2}$ function, an isotropic, differentiable, stationary kernel commonly used in statistical fitting, of the form
$$ r(x, x') = \left( 1 + \frac{\sqrt{5} \, \hat{d}(x, x')}{\psi} + \frac{5 \, \hat{d}^{2}(x, x')}{3 \psi^{2}} \right) \exp\left( - \frac{\sqrt{5} \, \hat{d}(x, x')}{\psi} \right), \qquad (17) $$
which uses a length-scale hyperparameter $\psi$. For the Gaussian process described, the hyperparameters can be collected as $\chi = (\boldsymbol{\beta}, \sigma^2, \psi)$.
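For concreteness, a minimal sketch of the covariance of Equations (16) and (17) and of draws from the corresponding prior is given below; the hyperparameter values and the linear basis $g(x) = [1, x]$ are placeholders, not the values identified later in the calibration.

```python
import numpy as np

def matern52_cov(X, Z, sigma2=1.0, psi=1.0):
    """c(x, x') = sigma^2 r(x, x') with r the Matern 5/2 kernel, Eqs. (16)-(17)."""
    d = np.sqrt(((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1))
    a = np.sqrt(5.0) * d / psi
    return sigma2 * (1.0 + a + a ** 2 / 3.0) * np.exp(-a)

def gp_prior_samples(X, beta, sigma2, psi, n_samples=3, seed=0):
    """Draws from the GP prior with linear mean m(x) = g(x)^T beta, g(x) = [1, x]."""
    rng = np.random.default_rng(seed)
    g = np.hstack([np.ones((len(X), 1)), X])
    mean = g @ beta
    cov = matern52_cov(X, X, sigma2, psi) + 1e-10 * np.eye(len(X))  # jitter
    return rng.multivariate_normal(mean, cov, size=n_samples)

# Example: three prior draws over a one-dimensional input grid
X = np.linspace(0.0, 1.0, 50)[:, None]
draws = gp_prior_samples(X, beta=np.array([0.0, 1.0]), sigma2=1.0, psi=0.2)
```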
Let us now describe in which sense Bayesian analysis can be used for model calibration. The value of a computer model depends on some input variables $x$ that are measurable, and on some parameters $t$ that are difficult to determine because they are not directly measurable. Let us assume that $t = \theta$ is the true value of the parameters, which is unknown and which we would like to determine based on the available data. Given some input variable $x$, a physical experiment of the problem we want to model will produce a scalar output $z$ that verifies
$$ z = \eta(x, \theta) + \delta(x) + \varepsilon(x). \qquad (18) $$
In this equation, $\eta(x, \theta)$ is the value of the computer model evaluated at the input variable $x$ and the true parameter $\theta$, $\delta(x)$ is the so-called model inadequacy, and $\varepsilon(x)$ is the observation error. This last term can be taken to be a random variable with a Gaussian probability distribution $N(0, \lambda^2)$. The functions $\eta$ and $\delta$ are completely unknown, so we can assume them to be Gaussian processes with hyperparameters $\chi_\eta$ and $\chi_\delta$, respectively.
If $\theta$, $\chi_\eta$, $\chi_\delta$, and $\lambda^2$ were known, we could study the multivariate probability distribution of the output $z$ using Equation (18) for any set of inputs $(x_1, x_2, \ldots, x_s)$. However, we are interested in solving the inverse problem: we have a set of experimental and computational data, and we would like to determine the most likely probability distribution for $\theta$ and the hyperparameters, a problem that can be effectively addressed using Bayes’ theorem.
Bayes’ theorem states that, given a prior probability for the parameters $(\theta, \chi_\eta, \chi_\delta, \lambda^2)$, denoted $p(\theta, \chi_\eta, \chi_\delta, \lambda^2)$, the posterior probability density function of these parameters after obtaining the data $\Delta$ is
$$ p(\theta, \chi_\eta, \chi_\delta, \lambda^2 \mid \Delta) \; \propto \; p(\theta, \chi_\eta, \chi_\delta, \lambda^2) \, p(\Delta \mid \theta, \chi_\eta, \chi_\delta, \lambda^2). \qquad (19) $$
The prior for the parameters and hyperparameters can be taken as Gaussian, or as any other probability distribution that fits our initial knowledge. Assuming that the parameters and hyperparameters are independent, we have in any case that
$$ p(\theta, \chi_\eta, \chi_\delta, \lambda^2) = p(\theta) \, p(\chi_\eta, \chi_\delta, \lambda^2). \qquad (20) $$
In addition, since Equation (18) indicates that the output is the sum of three random variables with Gaussian distributions, z itself is a Gaussian.
To apply Bayes’ theorem, it remains to calculate the likelihood $p(\Delta \mid \theta, \chi_\eta, \chi_\delta, \lambda^2)$. To this end, the hyperparameters of $\eta$ and $\delta$ are collected together with the observation error $\varepsilon$, and we assume that the conditioned random variable $\Delta \mid \theta, \chi_\eta, \chi_\delta, \lambda^2$ has a normal probability distribution of the form
$$ \mathcal{N}\big\{ \mathrm{E}[\Delta \mid \theta, \chi_\eta, \chi_\delta, \lambda^2], \; \operatorname{Var}[\Delta \mid \theta, \chi_\eta, \chi_\delta, \lambda^2] \big\}. \qquad (21) $$
Here, $\mathrm{E}[\cdot]$ and $\operatorname{Var}[\cdot]$ refer to the expectation and the variance, respectively. Finally, in order to obtain the marginal posterior probability density of the parameters, $p(\theta \mid \Delta)$, we should integrate out $\chi_\eta$, $\chi_\delta$, and $\lambda^2$; due to the high computational cost involved, this is typically done using Monte Carlo methods [29]. Details of this process fall outside the scope of the present work and can be found in standard references [12].
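In this work, this sampling step is delegated to the CaliCo R package [35]. Purely as an illustration of the idea, the sketch below draws samples from $p(\theta \mid \Delta)$ with a random-walk Metropolis algorithm, ignoring the discrepancy term $\delta$ and keeping the hyperparameters fixed; all function names, priors, and settings are assumptions made for the example.

```python
import numpy as np

def metropolis(log_post, theta0, n_steps=20000, step=0.05, seed=0):
    """Random-walk Metropolis sampler for an unnormalized log-posterior."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    logp = log_post(theta)
    chain = np.empty((n_steps, theta.size))
    for k in range(n_steps):
        prop = theta + step * rng.standard_normal(theta.size)   # propose a move
        logp_prop = log_post(prop)
        if np.log(rng.uniform()) < logp_prop - logp:             # accept/reject
            theta, logp = prop, logp_prop
        chain[k] = theta
    return chain

def make_log_post(x_obs, z_obs, emulator, prior_mean, prior_sd, lam2):
    """Gaussian prior on theta and Gaussian likelihood z = eta(x, theta) + eps:
    a simplified version of Eqs. (18)-(21) without the discrepancy delta(x)."""
    def log_post(theta):
        log_prior = -0.5 * np.sum(((theta - prior_mean) / prior_sd) ** 2)
        resid = z_obs - emulator(x_obs, theta)      # emulator plays the role of eta
        return log_prior - 0.5 * np.sum(resid ** 2) / lam2
    return log_post
```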

2.4. Material Models

In Section 4, a sensitivity and calibration procedure is applied to three relatively complex material models employed in advanced industrial applications. Here, we summarize them, listing all the parameters involved in their description. The remainder of the article will focus on ranking the significance of these parameters, their influence in the predictions, and the determination of their probability distributions.

2.4.1. Johnson–Cook Constitutive Relations

The Johnson–Cook (JC) constitutive model [17] is commonly used to reproduce the behavior of metals subjected to large strains, high strain rates, and high temperatures. It is not extremely accurate in all ranges of strain rates and temperatures, but it is simple to implement, robust, and, as a result, has been employed over the years in a large number of applications [30,31,32].
Johnson–Cook’s model is a classical $J_2$ plasticity constitutive law in which the yield stress $\sigma_y$ is assumed to be of the form:
$$ \sigma_y = \left( A + B \, \varepsilon_p^{\,n} \right) \left( 1 + C \log \dot{\varepsilon}_p^{\,*} \right) \left( 1 - T^{*m} \right), \qquad (22) $$
where $\varepsilon_p$ refers to the equivalent plastic strain. The first term in expression (22) accounts for the quasistatic hardening, including the initial yield stress $A$, the strain hardening constant $B$, and the exponent $n$. The second term in (22) is related to the hardening due to strain rate effects, containing the strain rate constant $C$ and also the dimensionless plastic strain rate $\dot{\varepsilon}_p^{\,*} = \dot{\varepsilon}_p / \dot{\varepsilon}_{p0}$, where $\dot{\varepsilon}_{p0}$ is a reference plastic strain rate, often taken to be equal to 1. Finally, the third term accounts for the effects of temperature, including the thermal softening exponent $m$ and also the so-called “homologous temperature”
$$ T^{*} = \frac{T_{exp} - T_{room}}{T_{melt} - T_{room}}. \qquad (23) $$
Here, $T_{exp}$ is the experimental temperature at which the material is being modeled, $T_{melt}$ is the melting temperature of the material, and $T_{room}$ is the ambient temperature.
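The direct evaluation of Equations (22) and (23) is straightforward, as in the sketch below; the material constants are the nominal CrMo-steel values of Table 2, while the reference strain rate and the temperatures are illustrative assumptions only.

```python
import numpy as np

def jc_yield_stress(eps_p, deps_p, T, A, B, C, n, m,
                    deps_p0=1.0, T_room=293.0, T_melt=1700.0):
    """Johnson-Cook flow stress, Eqs. (22)-(23). Stress in MPa, temperatures in K;
    the reference rate and temperatures are placeholders for illustration."""
    T_star = (T - T_room) / (T_melt - T_room)    # homologous temperature, Eq. (23)
    rate_star = deps_p / deps_p0                 # dimensionless plastic strain rate
    return (A + B * eps_p ** n) * (1.0 + C * np.log(rate_star)) * (1.0 - T_star ** m)

# Nominal values from Table 2; strain, rate, and temperature chosen for the example
sigma_y = jc_yield_stress(eps_p=0.1, deps_p=1.0e3, T=600.0,
                          A=113.0, B=211.0, C=0.073, n=0.218, m=0.818)
```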

2.4.2. Zerilli–Armstrong Constitutive Relations

The Zerilli–Armstrong (ZA) model [18] was conceived as a new approach to metal plasticity modeling, using a finite deformation formulation based on constitutive relations rooted in the physics of dislocation mechanics, in contrast to purely phenomenological constitutive relations such as the previously described Johnson–Cook model. These relations have proved to be well suited for modeling the response of metals to high strains, strain rates, and temperatures. The numerical implementation of the model, although more complicated than that of the JC model, is still relatively simple, justifying its popularity.
The ZA relations were developed in response to the need for a physically based model that could capture the strong dependence of the flow stress of metals and alloys on dislocation mechanics. For instance, aspects like the grain size, the thermal activation energy, or the characteristic crystal unit cell structure have a dramatic effect on the plastic response of these materials, according to experimental data. Hence, the ZA model is still a $J_2$ plasticity model in which the yield stress becomes a function of strain, strain rate, and temperature, but with their relative contributions weighted by constants that have physical meaning. The yield stress is assumed to be
$$ \sigma_y = \left( C_1 + C_2 \, \varepsilon_p^{1/2} \right) \exp\left( -C_3 \, T + C_4 \, T \log \dot{\varepsilon}_p \right) + C_5 \, \varepsilon_p^{\,n} + k \, l^{-1/2} + \sigma_G. \qquad (24) $$
In this relation, $C_1, C_2, C_3, C_4, C_5, k, \sigma_G, l$ are constants. The constants $\sigma_G$, $k$, and $l$ represent, respectively, the contributions to the yield stress due to solutes, the initial dislocation density, and the average grain diameter. The remaining constants are selected to distribute the contributions to the hardening of the plastic strain, its rate, and the temperature. Based on the crystallographic structure of the metal under study, some of the constants $C_i$ are set to zero. For example, fcc metals such as copper have $C_1 = C_5 = 0$, while iron and other bcc metals are represented with equations that have $C_2 = 0$. These differences are mainly based on the physical influence of the effects of strain on each type of structure, which is especially dominant when modeling fcc metals, whereas strain-rate hardening, thermal softening, and grain size have a greater effect on bcc metals.
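A sketch of Equation (24) follows. The constants are the nominal values of Table 2, where the first two constants are labeled $C_0$ and $C_1$; they are assumed here to correspond to $C_1$ and $C_2$ of the equation. The sign of the thermal term follows the reconstruction above, and the grain-size and solute contributions are set to zero for simplicity.

```python
import numpy as np

def za_yield_stress(eps_p, deps_p, T, C1, C2, C3, C4, C5, n,
                    k=0.0, l=1.0, sigma_G=0.0):
    """Zerilli-Armstrong flow stress, Eq. (24). Stress in MPa, temperature in K."""
    thermal = np.exp(-C3 * T + C4 * T * np.log(deps_p))
    return ((C1 + C2 * np.sqrt(eps_p)) * thermal
            + C5 * eps_p ** n + k / np.sqrt(l) + sigma_G)

# Nominal values from Table 2 (C0 and C1 there mapped to C1 and C2 here)
sigma_y = za_yield_stress(eps_p=0.1, deps_p=1.0e3, T=600.0,
                          C1=707.2, C2=575.0, C3=0.00698, C4=0.00032,
                          C5=637.5, n=0.41)
```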

2.4.3. Arrhenius-Type Model Constitutive Relations

Last, we consider an Arrhenius-type (AR) constitutive model [19], a strain-compensated equation aiming to reproduce the behavior of metals at high temperature. As in the previous constitutive laws, the AR model is a classical $J_2$ plasticity model with an elaborate expression for the yield stress $\sigma_y$. In this case, it is defined as
$$ \sigma_y = \frac{1}{\alpha(\varepsilon_p)} \sinh^{-1}\!\left[ \left( \frac{Z(\varepsilon_p, \dot{\varepsilon}_p, T)}{A(\varepsilon_p)} \right)^{1/n} \right], \qquad (25) $$
where $\alpha : \mathbb{R} \to \mathbb{R}$ and $A : \mathbb{R} \to \mathbb{R}$ are two functions employed to represent the influence of the plastic strain on the response, and $n$ is a material exponent. On the other hand, $Z : \mathbb{R} \times \mathbb{R} \times \mathbb{R}^{+} \to \mathbb{R}$ is the so-called Zener–Hollomon function, which accounts for the effects of the strain rate $\dot{\varepsilon}_p$ and the temperature $T$, and is defined as
$$ Z(\varepsilon_p, \dot{\varepsilon}_p, T) := \dot{\varepsilon}_p \, \exp\!\left( \frac{Q(\varepsilon_p)}{R \, T} \right), \qquad (26) $$
where R is the universal gas constant and Q : R R is the activation energy, assumed to be a third-order polynomial.
The scalar functions that enter the definition of the yield function are thus $\alpha$, $A$, and $Q$. The three are defined parametrically as
$$ \alpha(\varepsilon_p) = \alpha_0 + \alpha_1 \varepsilon_p + \alpha_2 \varepsilon_p^2 + \alpha_3 \varepsilon_p^3, \qquad Q(\varepsilon_p) = Q_0 + Q_1 \varepsilon_p + Q_2 \varepsilon_p^2 + Q_3 \varepsilon_p^3, \qquad A(\varepsilon_p) = \exp\left( A_0 + A_1 \varepsilon_p + A_2 \varepsilon_p^2 + A_3 \varepsilon_p^3 \right), \qquad (27) $$
where $\alpha_0, \ldots, \alpha_3$, $Q_0, \ldots, Q_3$, and $A_0, \ldots, A_3$ are material constants determined experimentally. Depending on the author, these three functions might adopt slightly different forms, potentially leading to higher accuracy at the expense of a more difficult calibration.
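The strain-compensated Equations (25)–(27) can be evaluated as in the sketch below; the coefficients are the nominal values of Table 2 (with the signs as printed there), and the units of $Q$ and $R$, here assumed to be kJ/mol and kJ/(mol·K), must be consistent with the source of the coefficients [19].

```python
import numpy as np

def arrhenius_yield_stress(eps_p, deps_p, T, alpha_c, Q_c, A_c, n, R=8.314e-3):
    """Arrhenius-type flow stress, Eqs. (25)-(27). Coefficient lists are ordered
    [c0, c1, c2, c3]; R must use units consistent with Q (assumed kJ/(mol K))."""
    alpha = np.polyval(alpha_c[::-1], eps_p)     # alpha(eps_p), Eq. (27)
    Q = np.polyval(Q_c[::-1], eps_p)             # activation energy Q(eps_p)
    A = np.exp(np.polyval(A_c[::-1], eps_p))     # A(eps_p)
    Z = deps_p * np.exp(Q / (R * T))             # Zener-Hollomon parameter, Eq. (26)
    return np.arcsinh((Z / A) ** (1.0 / n)) / alpha

# Nominal coefficients from Table 2; strain, rate, and temperature are assumed
alpha_c = [0.009481, 0.003841, 0.012971, 0.025892]
Q_c = [412.31, 510.82, 1873.4, 1872.4]
A_c = [36.402, 68.301, 254.32, 255.57]
sigma_y = arrhenius_yield_stress(eps_p=0.1, deps_p=1.0, T=1223.0,
                                 alpha_c=alpha_c, Q_c=Q_c, A_c=A_c, n=5.2248)
```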

3. Application

The methodology presented in Section 2 is applied now to a relevant example in mechanics of deformable solids, namely, Taylor’s impact test [21,22]. In what follows, we will study the calibration of the three material models of Section 2.4 based on the outputs obtained from this well-known test that consists of a high-velocity impact of a metallic anvil onto a rigid wall. As illustrated in Figure 2, the impact creates irrecoverable deformations in the anvil that, due to the symmetry of the problem, can be macroscopically quantified by measuring the changes in the diameter and length of the impactor.
Figure 3 illustrates the procedure advocated for our numerical analysis: starting from a prior distribution for the material parameters, a meta-model of Taylor’s impact test is constructed based on anisotropic RBF. The meta-model, once completed, is cheap to run and can be used to perform sensitivity analyses and to update, via Bayesian calibration, the probability distribution of the original parameters. If deemed necessary, the latter probability distribution can be reintroduced in the Bayesian calibration, this time as prior, as illustrated in Figure 3, until the parameter distribution converges to an (almost) stationary function. In theory, one could use the posterior probabilities to start the whole process, helping to build a better meta-model that will be later employed in the GSA and calibration. This route, however, might be too expensive in real life applications.
To build the meta-models, five impact velocities are selected over a typical range of Taylor’s bar experiments: namely, 200, 230, 260, 290, and 320 m/s. Then, the tests for each impact velocity are simulated considering a Cr-Mo steel as the anvil’s material. Each impact velocity point consists of 612 simulations for the Johnson–Cook and Zerilli–Armstrong models, and 1800 for the Arrhenius-type model, since the latter involves a larger number of material parameters and requires more data to reach reliable levels of accuracy when constructing the meta-models. The parameters fed to the simulations have been sampled from uniform distributions centered at nominal values taken from the literature [19,33] with ±10% ranges, varying them according to a Low Discrepancy Design (LDD) method, or Sobol sequence [34]. The latter is obtained with a deterministic algorithm that subdivides each dimension of the sample space into $2^N$ points, while ensuring good uniformity properties.
The QoIs selected for the meta-model are Δ R and Δ L ; that is, the changes in radius and length of the anvil after impact. Using the methods described in Section 2.2, an RBF-based meta-model is obtained for each material and impact velocity. The meta-models now serve as the basis for the Global Sensitivity Analysis that will identify the most significant parameters in each model, ruling out from the Bayesian calibration those whose influence on the QoIs is relatively small. Finally, for each of the material models, a full Bayesian analysis will be done based on the concepts of Section 2.3, providing a fitted Gaussian process per model and QoI. This last step demands standard but cumbersome operations and has been performed using a freely available R package [35].
To complete the Bayesian calibration, we need meta-model predictions for arbitrary velocities of the impactor. Since the available meta-models are only defined for the five selected velocities, we will linearly interpolate their predictions for the QoI at any intermediate velocity (see Figure 4, Figure 5 and Figure 6), as sketched below. This strategy will speed up the generation of data for the Bayesian analysis. To validate it, we will first confirm that the error made by this interpolation is negligible. For that, we will compare solutions obtained with FE simulations at arbitrary velocities of the anvil against interpolated meta-model predictions.
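A minimal sketch of this interpolation step follows; `meta_models` is a hypothetical name for the list of five velocity-specific surrogates built previously.

```python
import numpy as np

velocities = np.array([200.0, 230.0, 260.0, 290.0, 320.0])   # m/s, as in the text

def interpolated_qoi(v, theta, meta_models):
    """Piecewise-linear interpolation, over the impact velocity, of the predictions
    of the five velocity-specific meta-models for a fixed parameter set theta."""
    qoi = np.array([mm(theta) for mm in meta_models])   # one prediction per velocity
    return np.interp(v, velocities, qoi)
```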
Once accepted, this strategy for combining meta-models will result in an extremely cheap source of simulated data that will be used to study the material models. For each of the latter, the fitting data will consist of $n_1 = 20$ sets of observed points $\Delta_1 = \{x_1, \ldots, x_{n_1}\}$, plus $n_2 = 500$ sets of computational outputs derived from the meta-model interpolation, $\Delta_2 = \{(x'_1, t_1), \ldots, (x'_{n_2}, t_{n_2})\}$, where $x_i$ and $x'_i$ are, respectively, the experimental and the interpolated impact velocities acting as the variable input, while $t_i$ are the parameter inputs to the meta-model. To assess the results of the meta-model interpolation, the FE cases against which they are compared are generated employing the same parameter inputs $t_i$ and impact velocities $x'_i$.
In this work, we have chosen to calibrate CrMo steel because the parameters for the JC, ZA, and ARR models could be found in the literature for this material. However, no experimental measurements are available for Taylor tests with anvils of this material. Hence, we follow an alternative avenue to obtain data, one that is often employed in statistical analyses [36,37]. The idea is to generate data from finite element simulations (20 in our procedure) using a fixed material model with nominal parameters, exploring all impact velocities and adding white Gaussian noise to all the measured QoIs, consistent with Equation (18).
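A minimal sketch of this data-generation step is shown below; `fe_taylor_nominal` and `noise_sd` are hypothetical placeholders for the finite element solver run with nominal parameters and for the chosen noise level, respectively.

```python
import numpy as np

# Hypothetical stand-in: in the actual workflow this would be a full finite element
# simulation of Taylor's test with the nominal material parameters.
def fe_taylor_nominal(v):
    return 1.0e-3 * v            # placeholder response, not the real FE output

noise_sd = 0.05                  # assumed observation-noise standard deviation

rng = np.random.default_rng(0)
v_exp = rng.uniform(200.0, 320.0, size=20)                       # n1 = 20 virtual tests
z_clean = np.array([fe_taylor_nominal(v) for v in v_exp])        # nominal-model outputs
z_exp = z_clean + rng.normal(0.0, noise_sd, size=z_clean.shape)  # white noise, Eq. (18)
```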
To complete the problem definition, it remains to choose prior probability distributions for the complete set of material parameters θ , the variance of the global observation error λ 2 , and the hyperparameters χ δ of the discrepancy function. Table 1 and Table 2 describe the probability distributions chosen for each parameter in the three models and the references employed for their choice.

4. Results

We now present the results of the GSA, the meta-model interpolation, and the calibration procedure for the three material models described in Section 2.4, based on the results obtained from the experiments of Taylor’s anvil impact. These are obtained from the RBF meta-models whose construction is detailed in Section 2.2 and Section 3.

4.1. Sensitivity Analysis

First, we present the results of the sensitivity analyses, as summarized in the pie charts of Figure 7, Figure 8 and Figure 9. For each of the three material models, these figures depict the contributions of the parameters to the global variance at two impact velocities, considering the two QoIs independently: namely, the increments in the anvil’s radius and length.
For all three models, the pie charts expressing the parameters’ influence are slightly different, as expected from a complex experiment. However, the most significant result of the analysis is that the most influential parameters of each material model coincide across the four sensitivity charts.
To proceed, we identify for each material model the smallest set of parameters whose combined influence accounts for at least 90% of the total QoI variance in all the tests performed, and we summarize these findings in Table 3; a simple way to automate this selection is sketched after this paragraph. These results are useful in two ways. First, they simplify the ensuing Bayesian calibration, limiting the number of hyperparameters of the Gaussian processes and the computations involved in the likelihood calculations. Second, from a quantitative point of view, they reveal to users of these material models in numerical simulations the most influential parameters of the three laws considered, where most of the calibration effort should be placed, irrespective of the methodology followed to this end.
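One possible way to automate this selection from the first-order indices is sketched below; the 90% threshold follows the text, while the ranking rule itself and the example indices are assumptions made only for illustration.

```python
import numpy as np

def significant_parameters(S1, names, threshold=0.90):
    """Smallest set of parameters whose summed first-order Sobol indices account
    for at least `threshold` of the total QoI variance."""
    order = np.argsort(S1)[::-1]                   # most influential first
    cumulative = np.cumsum(S1[order])
    k = int(np.searchsorted(cumulative, threshold)) + 1
    return [names[i] for i in order[:k]]

# Example with made-up indices for the five JC parameters
S1 = np.array([0.45, 0.35, 0.15, 0.03, 0.02])
print(significant_parameters(S1, ["A", "B", "C", "n", "m"]))   # -> ['A', 'B', 'C']
```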

4.2. Linear Interpolation of Meta-Models

In Section 3, it was proposed to interpolate linearly meta-model predictions to extend the latter to arbitrary anvil velocities. Next, in Figure 10, Figure 11 and Figure 12, we show a comparison between the predictions of the QoI Δ R obtained from meta-model interpolation and full FE simulations.
Observing these plots and the results collected in Table 4, we conclude that the meta-model interpolation for the Johnson–Cook and Zerilli–Armstrong models provides accurate predictions of ΔR for arbitrary impact velocities. In contrast, the interpolations of the Arrhenius-type model are not as accurate, possibly due to the relatively higher non-linearity of its constitutive equation, which directly affects the flow stress computation. Without a direct means of verifying this assertion, we might speculate that these non-linearities trigger complex deformation patterns in the anvil once the material enters the plastic regime. However, given that the maximum relative error is below $7 \times 10^{-2}$ in all three cases, we accept the interpolated predictions for the three constitutive models. This choice results in huge computational savings for the Bayesian calibration.
We have also validated the linear interpolation strategy for the quantity Δ L . The results are very similar to the ones obtained for Δ R and the interpolation plots are not presented. Table 5 collects the errors made by the meta-model for Δ L as compared with the FE solution, leading us to conclude, as for the previous QoI, that the interpolated predictions are accurate enough.

4.3. Bayesian Calibration

Finally, we proceed to perform a Bayesian calibration of the material models employed in the sensitivity analysis, keeping fixed at their nominal value those parameters that have been found to be non-influential in the sensitivity analyses. The calibration results considering both QoIs, Δ R and Δ L , are shown in Figure 13, Figure 14, Figure 15, Figure 16, Figure 17 and Figure 18.
Specifically, Figure 13 shows the prior probability distribution functions provided for the three most relevant parameters of the JC model, A, B, and C, together with their posterior probability functions. Similarly, Figure 15 illustrates the same probability functions, now for the most relevant parameters of the ZA model: namely, C0, C3, C5, and n.
Based on the results of the calibration, we can make some general comments on the calibrated models. In the case of the JC constitutive law, the calibration process has notably sharpened the probability density functions of the three most significant parameters (see Figure 13 and Figure 14), eliminating a great part of the uncertainty linked to the variance of the prior probability distributions. Comparing the calibrated values of the parameters of the JC model obtained for the two QoIs analyzed (see Table 6), we note that both are similar. This suggests that the JC model is a good constitutive model for capturing the physics behind Taylor’s test. In turn, the posterior probability distributions for the ZA model barely reduce the variance of the priors (cf. Figure 15 and Figure 16). As a consequence, the calibration does not significantly reduce the uncertainty in the parameters. In addition, some of the calibrated parameters for the two QoIs under study have large disparities in their means. This is a consequence of the fact that, in our experiments, simulations carried out with the ZA model predict softer results, irrespective of the impact velocity and QoI observed. This fact, far from being a negative result, proves the potential of the method and illustrates that, when the experimental and the simulation data are not in full agreement, the outcome of the calibration warns of greater uncertainty in the model and/or the data, or even of the inability of the constitutive model to capture the physics of the problem.
Regarding the calibration of the Arrhenius model, Figure 17 and Figure 18 show that the variance reduction in the posterior probability distributions of its parameters is not as strong as for the Johnson–Cook constitutive law, although it is still significant when compared to the Zerilli–Armstrong case. Something similar happens when analyzing the (mean) calibrated parameters obtained for the two QoIs. Even if there is good overall agreement among them, the calibrated parameter α3 obtained for the two QoIs is fairly different. A potential explanation can be found, again, in the complexity of the constitutive equation and in the effects of ignoring a large number of the model parameters. While this conscious choice saves much computational cost, it causes a loss of information on the model behavior that could, in any case, be countered to a large extent with additional simulated data at a low computational cost.

5. Conclusions

Calibrating complex material models using experimental tests and simulations is a critical task in computational engineering. When done in combination with statistical inference, this process can yield accurate values for the unknown material parameters plus additional information about their scatter and confidence intervals. For example, Gaussian processes provide a natural and powerful framework to combine physical and numerical tests to obtain probability distributions of the material parameters. To fully exploit the potential of this kind of analysis, however, a large number of data points is required, and these can be most effectively obtained by employing a meta-model.
In this work, we described an effective framework for calibrating complex material models based on the combination of meta-models built on top of anisotropic Radial Basis Functions, Global Sensitivity Analysis, and Gaussian processes. The integration of these techniques results in a robust and efficient workflow.
We have employed the framework described for the calibration of three extremely common, although complex material models. These are the Johnson–Cook, the Zerilli–Armstrong, and Arrhenius-type models, and are typically employed for the characterization of the elasto-visco-plastic response of metals under high strain rates, and possibly high temperature as well. The outcome of our analysis is two-fold. First, we are able, for each material model, to rank the sensitivity of an impact simulation with respect to each of the parameters involved. Second, the framework produces a probability distribution for all the calibrated parameters as a function of the available or generated data, tapping into previously built and extremely fast statistical tools to obtain them. Such a characterization is more complete than simple point estimates, often employed when fitting material models.
Let us conclude by noting that the procedures described in this work are applicable beyond material calibration to, in principle, any problem where model evaluations and experimental setups are costly.

Author Contributions

Conceptualization, I.R.; methodology, E.M. and I.R.; software, J.L.d.P., E.M., and I.R.; validation, J.L.d.P. and E.M.; data curation, J.L.d.P. and E.M.; writing—original draft preparation, J.L.d.P. and E.M.; writing—review and editing, I.R.; visualization, J.L.d.P. and E.M.; supervision, I.R.; funding acquisition, I.R. All authors have read and agreed to the published version of the manuscript.

Funding

JdP has been partially funded by the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme (Call Reference No: JTI-CS2-2017-CfP07-ENG-03-22) under grant agreement No 821044. IR would also like to acknowledge the funding received from the Spanish Ministry of Science, Innovation and Universities through project HEXAGB (RTI2018-098245-B- C21).


Conflicts of Interest

The authors declare no conflict of interest.

References

1. Iooss, B.; Lemaître, P. A review on global sensitivity analysis methods. In Uncertainty Management in Simulation-Optimization of Complex Systems; Springer: Berlin/Heidelberg, Germany, 2015; pp. 101–122.
2. Saltelli, A.; Ratto, M.; Andres, T.; Campolongo, F.; Cariboni, J.; Gatelli, D.; Saisana, M.; Tarantola, S. Global Sensitivity Analysis: The Primer; John Wiley & Sons: Hoboken, NJ, USA, 2008.
3. Box, G.; Draper, N. Empirical Model-Building and Response Surfaces; John Wiley & Sons: Hoboken, NJ, USA, 1987.
4. Iooss, B.; Van Dorpe, F.; Devictor, N. Response surfaces and sensitivity analyses for an environmental model of dose calculations. Reliab. Eng. Syst. Saf. 2006, 91, 1241–1251.
5. Rohmer, J. Dynamic sensitivity analysis of long-running landslide models through basis set expansion and meta-modelling. Nat. Hazards 2014, 73, 5–22.
6. Todri, E.; Amenaghawon, A.; Del Val, I.; Leak, D.; Kontoravdi, C.; Kucherenko, S.; Shah, N. Global sensitivity analysis and meta-modeling of an ethanol production process. Chem. Eng. Sci. 2014, 114, 114–127.
7. Welch, W.; Buck, R.; Sacks, J.; Wynn, H.; Mitchell, T.; Morris, M. Screening, predicting, and computer experiments. Technometrics 1992, 34, 15–25.
8. Buhmann, M. Radial Basis Functions: Theory and Implementations; Cambridge University Press: Cambridge, UK, 2003; Volume 12.
9. MacAllister, A.; Kohl, A.; Winer, E. Using High-Fidelity Meta-Models to Improve Performance of Small Dataset Trained Bayesian Networks. Expert Syst. Appl. 2019, 139, 112830.
10. Guttman, I.; Tiao, G. A Bayesian Approach to Some Best Population Problems; Technical Report; Wisconsin Univ-Madison: Madison, WI, USA, 1963.
11. Aitchison, J.; Dunsmore, I. Statistical Prediction Analysis; CUP Archive: Cambridge, UK, 1980.
12. Kennedy, M.; O’Hagan, A. Bayesian calibration of computer models. J. R. Stat. Soc. Ser. B Stat. Methodol. 2001, 63, 425–464.
13. Higdon, D.; Kennedy, M.; Cavendish, J.; Cafeo, J.; Ryne, R. Combining Field Data and Computer Simulations for Calibration and Prediction. SIAM J. Sci. Comput. 2004, 26, 448–466.
14. O’Hagan, A. Bayesian analysis of computer code outputs: A tutorial. Reliab. Eng. Syst. Saf. 2006, 91, 1290–1300.
15. Draper, D.; Pereira, A.; Prado, P.; Saltelli, A.; Cheal, R.; Eguilior, S.; Mendes, B.; Tarantola, S. Scenario and parametric uncertainty in GESAMAC: A methodological study in nuclear waste disposal risk assessment. Comput. Phys. Commun. 1999, 117, 142–155.
16. Janse, J.; Scheffer, M.; Lijklema, L.; Van Liere, L.; Sloot, J.; Mooij, W. Estimating the critical phosphorus loading of shallow lakes with the ecosystem model PCLake: Sensitivity, calibration and uncertainty. Ecol. Model. 2010, 221, 654–665.
17. Johnson, G. A constitutive model and data for materials subjected to large strains, high strain rates, and high temperatures. In Proceedings of the 7th International Symposium on Ballistics, The Hague, The Netherlands, 19–21 April 1983; pp. 541–547.
18. Zerilli, F.; Armstrong, R. Dislocation-mechanics-based constitutive relations for material dynamics calculations. J. Appl. Phys. 1987, 61, 1816–1825.
19. Samantaray, D.; Mandal, S.; Bhaduri, A. A comparative study on Johnson–Cook, modified Zerilli–Armstrong and Arrhenius-type constitutive models to predict elevated temperature flow behaviour in modified 9Cr–1Mo steel. Comput. Mater. Sci. 2009, 47, 568–576.
20. Portillo, D.; del Pozo, D.; Rodríguez-Galán, D.; Segurado, J.; Romero, I. MUESLI—A Material UnivErSal LIbrary. Adv. Eng. Softw. 2017, 105, 1–8.
21. Taylor, G. The use of flat-ended projectiles for determining dynamic yield stress I. Theoretical considerations. Proc. R. Soc. Lond. Ser. A Math. Phys. Sci. 1948, 194, 289–299.
22. Whiffin, A. The use of flat-ended projectiles for determining dynamic yield stress II. Tests on various metallic materials. Proc. R. Soc. Lond. Ser. A Math. Phys. Sci. 1948, 194, 300–322.
23. Menga, E.; Sánchez, M.; Romero, I. Anisotropic meta-models for computationally expensive simulations in nonlinear mechanics. Int. J. Numer. Methods Eng. 2019.
24. Hoff, P. A First Course in Bayesian Statistical Methods; Springer: Berlin/Heidelberg, Germany, 2009; Volume 580.
25. Wernick, M.; Yang, Y.; Brankov, J.; Yourganov, G.; Strother, S. Machine learning in medical imaging. IEEE Signal Process. Mag. 2010, 27, 25–38.
26. Koenig, N.; Matarić, M. Robot life-long task learning from human demonstrations: A Bayesian approach. Auton. Robot. 2017, 41, 1173–1188.
27. Cofino, A.; Cano Trueba, R.; Sordo, C.; Gutiérrez Llorente, J. Bayesian networks for probabilistic weather prediction. In Proceedings of the 15th European Conference on Artificial Intelligence, ECAI’2002, Lyon, France, 21–26 July 2002.
28. Rasmussen, C.; Williams, C.K. Gaussian Processes for Machine Learning; MIT Press: Cambridge, MA, USA, 2006.
29. Carmassi, M.; Barbillon, P.; Keller, M.; Parent, E.; Chiodetti, M. Bayesian calibration of a numerical code for prediction. arXiv 2018, arXiv:1801.01810.
30. Shrot, A.; Bäker, M. Determination of Johnson–Cook parameters from machining simulations. Comput. Mater. Sci. 2012, 52, 298–304.
31. Li, H.; He, L.; Zhao, G.; Zhang, L. Constitutive relationships of hot stamping boron steel B1500HS based on the modified Arrhenius and Johnson–Cook model. Mater. Sci. Eng. A 2013, 580, 330–348.
32. Banerjee, A.; Dhar, S.; Acharyya, S.; Datta, D.; Nayak, N. Determination of Johnson Cook material and failure model constants and numerical modelling of Charpy impact test of armour steel. Mater. Sci. Eng. A 2015, 640, 200–209.
33. Valentin, T.; Magain, P.; Quik, M.; Labibes, K.; Albertini, C. Validation of constitutive equations for steel. J. Phys. IV 1997, 7, C3-611.
34. Sobol’, I. On the distribution of points in a cube and the approximate evaluation of integrals. Zhurnal Vychislitel’noi Matematiki i Matematicheskoi Fiziki 1967, 7, 784–802.
35. Carmassi, M.; Barbillon, P.; Chiodetti, M.; Keller, M.; Parent, E. CaliCo: An R package for Bayesian calibration. arXiv 2018, arXiv:1808.01932.
36. Craig, P.; Goldstein, M.; Rougier, J.; Seheult, A. Bayesian forecasting for complex systems using computer simulators. J. Am. Stat. Assoc. 2001, 96, 717–729.
37. Riddle, M.; Muehleisen, R. A guide to Bayesian calibration of building energy models. In Proceedings of the Building Simulation Conference, Atlanta, GA, USA, 10–12 September 2014.
Figure 1. Meta-modeling construction process.
Figure 2. Schematic of Taylor’s impact test.
Figure 3. Iterative process for a two-stage approach of screening and calibration of model parameters.
Figure 4. Linear interpolation of meta-model predictions of ΔR for the Johnson–Cook constitutive relation. Each piecewise linear interpolation connects predictions with the same model parameters.
Figure 5. Linear interpolation of meta-model predictions of ΔR for the Zerilli–Armstrong constitutive relation.
Figure 6. Linear interpolation of meta-model predictions of ΔR for the Arrhenius-type constitutive relation.
Figure 7. Global Sensitivity Analysis (GSA) results for the Johnson–Cook (JC) model considering ΔR and ΔL at 200 and 320 m/s.
Figure 8. GSA results for the Zerilli–Armstrong (ZA) model considering ΔR and ΔL at 200 and 320 m/s.
Figure 9. GSA results for the Arrhenius-type model considering ΔR and ΔL at 200 and 320 m/s.
Figure 10. Comparison of meta-model interpolated results and FEM simulation results, considering ΔR for the Johnson–Cook model.
Figure 11. Comparison of meta-model interpolated results and FEM simulation results, considering ΔR for the Zerilli–Armstrong model.
Figure 12. Comparison of meta-model interpolated results and FEM simulation results, considering ΔR for the Arrhenius-type model.
Figure 13. Prior vs. posterior probability distribution functions of parameters A (θ1), B (θ2), and C (θ3) of the JC model, considering ΔR as the QoI.
Figure 14. Prior vs. posterior probability distribution functions of parameters A (θ1), B (θ2), and C (θ3) of the JC model, considering ΔL as the QoI.
Figure 15. Prior vs. posterior parameter probability distributions for C0 (θ1), C3 (θ2), C5 (θ3), and n (θ4) of the ZA model, considering ΔR as the QoI.
Figure 16. Prior vs. posterior parameter probability distributions for C0 (θ1), C3 (θ2), C5 (θ3), and n (θ4) of the ZA model, considering ΔL as the QoI.
Figure 17. Prior vs. posterior parameter probability distributions for A2 (θ1), A3 (θ2), α3 (θ3), and n (θ4) of the Arrhenius-type model, considering ΔR as the QoI.
Figure 18. Prior vs. posterior parameter probability distributions for A2 (θ1), A3 (θ2), α3 (θ3), and n (θ4) of the Arrhenius-type model, considering ΔL as the QoI.
Table 1. Prior probability distributions.

Term/Parameter    Probability Distribution Function
θ                 N(μ, 1) for JC/ARR or N(μ, 10) for ZA (see Table 2 for μ)
λ²                Γ(1, 0.1)
σδ²               Γ(1, 0.1)
ψδ                U(0, 1)
Table 2. Mean values for the parameter distributions according to the literature [19,33].

Material Model      Mean Values
Johnson–Cook        A = 113 MPa, B = 211 MPa, C = 0.073, n = 0.218, m = 0.818
Zerilli–Armstrong   C0 = 707.2 MPa, C1 = 575 MPa, C3 = 0.00698 K⁻¹, C4 = 0.00032 K⁻¹, C5 = 637.5 MPa, n = 0.41
Arrhenius-type      Q0 = 412.31, Q1 = 510.82, Q2 = 1873.4, Q3 = 1872.4, A0 = 36.402, A1 = 68.301, A2 = 254.32, A3 = 255.57, α0 = 0.009481, α1 = 0.003841, α2 = 0.012971, α3 = 0.025892, n = 5.2248
Table 3. Model parameters accounting for 90% or more of the Quantity of Interest (QoI) variance.

Material Model      Significant Parameters
Johnson–Cook        A, B, C
Zerilli–Armstrong   C0, C3, C5, n
Arrhenius-type      A2, A3, α3, n
Table 4. Errors in the meta-model predictions of ΔR compared with full FE simulations.

Model               Mean Relative Error    Maximum Relative Error
Johnson–Cook        6.2 × 10⁻³             1.2 × 10⁻²
Zerilli–Armstrong   8.1 × 10⁻³             3.2 × 10⁻²
Arrhenius-type      1.1 × 10⁻²             6.7 × 10⁻²
Table 5. Errors in the meta-model predictions of ΔL compared with full FE simulations.

Model               Mean Relative Error    Maximum Relative Error
Johnson–Cook        6.9 × 10⁻³             2.3 × 10⁻²
Zerilli–Armstrong   9.4 × 10⁻³             3.6 × 10⁻²
Arrhenius-type      7.8 × 10⁻³             3.0 × 10⁻²
Table 6. A posteriori mean values obtained for each parameter, considering both QoIs, ΔR and ΔL, for the calibration of the models.

Material Model      Mean Values (ΔR)                                              Mean Values (ΔL)
Johnson–Cook        A = 112.7 MPa, B = 211.8 MPa, C = 0.065                       A = 112.4 MPa, B = 211.3 MPa, C = 0.073
Zerilli–Armstrong   C0 = 704.6 MPa, C3 = 0.00392 K⁻¹, C5 = 638.3 MPa, n = 1.65    C0 = 702.8 MPa, C3 = 1.65 K⁻¹, C5 = 636.1 MPa, n = 1.02
Arrhenius-type      A2 = 255.0, A3 = 254.8, α3 = 0.077, n = 4.9                   A2 = 254.0, A3 = 255.9, α3 = 0.59, n = 5.54
