Coefficient Extraction of SAC305 Solder Constitutive Equations Using Equation-Informed Neural Networks

Equation-Informed Neural Networks (EINNs) are developed as an efficient method for extracting the coefficients of constitutive equations. Numerical Bayesian Inference (BI) iterations are then applied to estimate the distribution of these coefficients, thereby further refining them. By carefully adjusting pre-processing mapping parameters and assigning dataset preferences, we can generate coefficients optimally aligned with the targeted application scenario. Leveraging graphical representation techniques, the EINN formulation is implemented for the temperature- and strain-rate-dependent hyperbolic Garofalo, Anand, and Chaboche constitutive models to extract the corresponding coefficients for lead-free SAC305 solder material. The performance of the EINN-extracted coefficients, obtained from experimental results for SAC305 solder, is comparable to that of existing studies. The methodology offers the dual advantage of providing both the values of the coefficients and their distributions against the training dataset.


Introduction
Proper material constitutive models and their coefficients are fundamental for reliable finite element predictions, encompassing performance prediction models [1], manufacturing process models [2,3], and reliability prediction models. Non-linear material properties, based on temperature- and strain-rate-dependent material models, are often necessary for modeling critical sections of electronic packaging [4,5] and further influence the accuracy and predictability of surrogate AI models [6-8].
Solder, a key component in electronic packaging, is often associated with potential fatigue failures. Wilde et al. conducted a study on the rate-dependent constitutive relationship of Pb-rich material [9], extracting Anand-model coefficients and identifying kinematic hardening, also known as the Bauschinger effect. To gain a better understanding of the creep characteristics of Pb-free solders, Xiao and Armstrong [10] performed tensile tests on both eutectic PbSn and Sn3.9Ag0.6Cu solder. Their findings revealed substantial microstructural alterations in the Sn3.9Ag0.6Cu, with significantly lower absolute creep rates than the PbSn eutectic. The creep measurement data were successfully fitted to the Garofalo model [11], and the corresponding Garofalo coefficients were extracted.
Furthermore, Motalab et al. [12,13] conducted creep tests under meticulous control of the microstructure of the SAC305 solder without an oxidized surface, yielding a set of nine parameters for the Anand model. Basit et al. [14] utilized the Anand constitutive model with the extracted coefficients for solder joint lifetime prediction. The Chaboche material model [15], which considers the Bauschinger effect, was applied by Xie and co-workers.

In this work, the constitutive equation is represented as a neural network that can be incrementally trained using input and output data pairs, facilitating the simultaneous approximation of the coefficients α_k. Theoretically, the steepest descent algorithm of neural network backpropagation bolsters computational efficiency and fosters the selective learning of data pairs. The final coefficients are obtained by the post-processing conversion. Utilizing the coefficients obtained by EINNs as initial values, Bayesian Inference (BI) is applied to obtain the distribution of the coefficients against the training datasets and to further enhance the accuracy of coefficient extraction.

This paper is organized as follows: the "Theory" section provides an introduction to the framework of Equation-Informed Neural Networks (EINNs) and the numerical Bayesian Inference (BI) method. The subsequent section, "EINN Formulation", presents the conversion process of constitutive equations from their conventional mathematical forms to their EINN equivalents, complete with pre-processing mapping and post-processing functions. In the "Applications" section, we apply the EINN formulation to the coefficient extraction of the material constitutive equations pertinent to Pb-free SAC305 solder joints. Detailed discussions and numerical results pertaining to the EINN formulations of the Chaboche, hyperbolic Garofalo, and Anand material models are also included. The paper concludes with a concise summary of our findings.

The Framework of Equation-Informed Neural Networks (EINNs)
Assume a constitutive equation is given by the function y_i = f(x_j, α_k) (Equation (1)), where y_i, x_j, and α_k are vectors in real space with dimensions i, j, and k, respectively. The x_j and y_i represent the input and output of the function, while α_k refers to the coefficients. Design pre-processing mapping functions M_x(x_j) = X_j and M_y(y_i) = Y_i (Equation (2)), which serve to modify the domains of x_j and y_i to optimize the precision of coefficient extraction. Consequently, a new function can be formulated as Y_i = F(X_j, A_k). Meanwhile, the corresponding neural network representation of Y_i is formulated, and the coefficients A_k are assigned as the weights. The learning process of the neural network involves the continuous adjustment of these weights, or coefficients. These adjustments can be computed for each known data pair using steepest-descent-based backpropagation as A_k^new (Equation (3)). Since these updates are independent of each data pair, the computationally expensive matrix multiplication and inversion inherent in least-squares-based approaches can be avoided. Furthermore, incorporating ratios into the adjustments allows the user to emphasize specific data pairs. This can be implemented as A_k^new = A_k + Σ_l r_l·ΔA_k^(l), where ΔA_k^(l) is the coefficient adjustment from the l-th data pair and Σ_l r_l = 1. Following several learning iterations with satisfactory accuracy, the coefficients A_k of the constitutive equation can be obtained. However, because the pre-processing mapping functions (2) have been applied, counteractions are required to reverse their effect. Therefore, we define the post-processing conversion functions, which map A_k back to the original coefficients α_k.
Through the combined application of pre-processing mapping functions and post-processing conversion of coefficients, the EINN framework gains an additional degree of freedom, bolstering the accuracy of coefficient extraction. Additionally, the steepest descent method offers a unique opportunity to prioritize specific data pairs while maintaining high computational efficiency.
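As a concrete illustration of the ratio-weighted steepest-descent update described above, the following Python sketch computes one adjustment per data pair and combines them with user-assigned ratios r_l. The linear toy model, learning rate, and ratio values are illustrative assumptions, not the actual EINN of this work.

```python
import numpy as np

def per_pair_updates(A, X, Y, lr=0.05):
    """Steepest-descent adjustment of coefficients A for each data pair.

    Toy stand-in for F(X, A): Y_hat = A[0] * X + A[1].
    Returns one adjustment vector per data pair; no matrix inversion needed.
    """
    deltas = []
    for x, y in zip(X, Y):
        y_hat = A[0] * x + A[1]
        err = y_hat - y
        grad = np.array([err * x, err])   # dE/dA for squared error E = 0.5*err**2
        deltas.append(-lr * grad)         # steepest-descent step from this pair
    return deltas

def weighted_step(A, deltas, ratios):
    """Combine per-pair adjustments with user ratios r_l, sum(r_l) == 1."""
    ratios = np.asarray(ratios, dtype=float)
    ratios = ratios / ratios.sum()        # enforce the normalization
    return A + sum(r * d for r, d in zip(ratios, deltas))

# Usage: emphasize the first two data pairs over the rest.
X = np.linspace(0.0, 1.0, 8)
Y = 2.0 * X + 0.5
A = np.array([0.0, 0.0])
ratios = [0.3, 0.3] + [0.4 / 6] * 6
for _ in range(2000):
    A = weighted_step(A, per_pair_updates(A, X, Y), ratios)
print(A)  # approaches [2.0, 0.5]
```

Because each adjustment is computed independently per data pair, emphasis on specific pairs reduces to choosing the ratios, exactly the flexibility used later for low-temperature, low-strain-rate data.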

The Numerical Bayesian Inference (BI) Iteration
We define the mean square error (MSE) function of Equation (1) with respect to the coefficients α_k as E(α_k) = (1/N)·Σ_l (y_t,i^(l) − f(x_j^(l), α_k))² (Equation (4)), where x_j^(l) and y_t,i^(l) denote the input and ground truth of the l-th data pair, respectively. Assume that the distributions of the data pairs y_t,i^(l) and x_j^(l) are normal, and so is the error function E(α_k), denoted as E(α_k) ∼ N(µ, τ). Because the precision parameter τ cannot be negative, we assume it follows a gamma distribution, τ ∼ G(a_0, b_0), where a_0 and b_0 are the gamma distribution parameters of τ. Moreover, assume that all the coefficients follow normal distributions, α_k ∼ N(µ_k, τ_k), where µ_k and τ_k are the mean and precision, respectively. The posterior distribution after the BI remains normal. In practice, we set µ_k equal to α_k.
Consequently, the probabilities of the precision τ and the coefficients α_k can be derived as in [27]. The posterior of the τ distribution can be updated by the gamma-gamma conjugate (Equation (5)). As Equation (1) is not always a linear function, the posterior of the coefficient α_k cannot always be computed by conjugacy. Therefore, under the assumption that the value of Δα_k is relatively small, a numerical integration approach is applied: the interval between the assigned minimum α_k^(0) and maximum values of α_k is divided into N equal splits, and the posterior is evaluated at each split point. The posterior can then be obtained using a normal distribution approximation.
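The numerical integration step can be sketched as follows: the unnormalized posterior of one coefficient is evaluated on an equally spaced grid between assigned bounds and then summarized by a normal approximation. The quadratic toy log-likelihood and flat prior are assumptions for illustration only.

```python
import numpy as np

def grid_posterior(alpha_min, alpha_max, n_splits, log_like, log_prior):
    """Numerical posterior for one coefficient on an equally spaced grid.

    Returns a normal approximation (mu, sigma) computed from the
    normalized grid weights, mirroring the non-conjugate update above.
    """
    grid = np.linspace(alpha_min, alpha_max, n_splits)
    log_post = np.array([log_like(a) + log_prior(a) for a in grid])
    w = np.exp(log_post - log_post.max())   # stabilize before normalizing
    w /= w.sum()
    mu = float(np.sum(w * grid))
    sigma = float(np.sqrt(np.sum(w * (grid - mu) ** 2)))
    return mu, sigma

# Usage with a toy quadratic log-likelihood centered at 1.3 (assumed example).
mu, sigma = grid_posterior(
    0.0, 3.0, 301,
    log_like=lambda a: -0.5 * ((a - 1.3) / 0.2) ** 2,
    log_prior=lambda a: 0.0,                # flat prior over the grid
)
print(mu, sigma)  # close to (1.3, 0.2)
```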
We employ the Markov Chain Monte Carlo (MCMC) method to compute large hierarchical models requiring integration over many parameters. By applying Gibbs sampling, the τ distribution parameters a_0 and b_0 are first updated through the conjugate (Equation (5)), and a new τ value is sampled from the gamma distribution. Each α_k is then updated sequentially, and the new value is accepted. Following thousands of iterations, every α_k exhibits a normal distribution. The mean value of this distribution is computed and assigned as the updated value for α_k.
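A minimal sketch of such a loop is given below, assuming a gamma-conjugate update for the precision τ and, for simplicity, a random-walk Metropolis acceptance for each coefficient in place of the grid-based update; the model, hyperparameters, and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def gibbs_iterations(x, y_t, model, alpha0, n_iter=2000, step=0.02):
    """Simplified Gibbs/MCMC loop: gamma-conjugate update for the noise
    precision tau, then a sequential random-walk update for each alpha_k.

    model(x, alpha) -> predictions; alpha0 is the EINN starting point.
    """
    a0, b0 = 1.0, 0.01                      # weak gamma prior on tau (assumed)
    alpha = np.array(alpha0, dtype=float)
    n = len(y_t)
    samples = []
    for _ in range(n_iter):
        resid = y_t - model(x, alpha)
        # Conjugate gamma posterior for the precision of normal errors.
        tau = rng.gamma(a0 + n / 2.0, 1.0 / (b0 + 0.5 * np.sum(resid ** 2)))
        for k in range(len(alpha)):         # sequential coefficient updates
            prop = alpha.copy()
            prop[k] += rng.normal(0.0, step)
            r_new = y_t - model(x, prop)
            # Metropolis acceptance under the current precision tau.
            log_acc = -0.5 * tau * (np.sum(r_new ** 2) - np.sum(resid ** 2))
            if np.log(rng.uniform()) < log_acc:
                alpha, resid = prop, r_new
        samples.append(alpha.copy())
    return np.array(samples)

# Usage: recover slope/intercept of y = 2x + 0.5 with small noise.
x = np.linspace(0.0, 1.0, 20)
y_t = 2.0 * x + 0.5 + rng.normal(0.0, 0.01, x.size)
s = gibbs_iterations(x, y_t, lambda x, a: a[0] * x + a[1], [1.8, 0.6])
print(s[500:].mean(axis=0))  # near [2.0, 0.5]
```

Discarding an initial burn-in and averaging the remaining samples yields both the refined coefficient value and its spread against the data.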

EINN Formulation
This section outlines the development of Equation-Informed Neural Network (EINN) formulations for the hyperbolic Garofalo, nine-parameter Anand, and Chaboche models, including pre-processing mapping and post-processing coefficient functions.

Hyperbolic Garofalo Model
The conventional hyperbolic Garofalo constitutive equation can be written as ε̇_p = C_1·[sinh(C_2·σ)]^{C_3}·exp(−Q/(RT)), where ε̇_p, σ, Q, R, and T represent the plastic strain rate, stress, activation energy, gas constant, and temperature, respectively. C_1, C_2, and C_3 are the coefficients that need to be extracted from the experimental data.
We introduce e = ε̇_p·e^{Q/RT} and accumulate the data pairs of {e} and {σ} from the experimental results. In order to proportionally map the original data onto the [a, b + a] domain, the pre-processing mapping functions scale each set as (σ − σ_m)·b/Δσ + a and (e − e_m)·b/Δe + a, where σ_m and Δσ represent the minimum value and the maximum difference of set {σ}, and e_m and Δe correspond to set {e}. Parameters a and b are parts of the pre-processing mapping, and a = 0.001 and b = 1 are assigned for this case. The values after the pre-processing are defined as x and y, respectively. Subsequently, a new function can be derived as Equation (9), with C, A, and n as the network weights. The corresponding neural network is defined in Figure 2, and the definition of the neurons is given in Table 1.
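The proportional mapping onto [a, b + a] can be sketched as below; the activation-energy ratio, temperatures, strain rates, and stresses are hypothetical placeholders, not measurements from [10].

```python
import numpy as np

def to_domain(v, a=0.001, b=1.0):
    """Map raw data proportionally onto [a, a + b], as in the pre-processing step."""
    v = np.asarray(v, dtype=float)
    v_m, dv = v.min(), v.max() - v.min()     # minimum and range of the set
    return (v - v_m) * b / dv + a, v_m, dv

# Usage: hypothetical creep data e = eps_dot * exp(Q/RT) and stresses sigma.
Q_over_R = 6500.0                            # assumed Q/R in kelvin (illustrative)
T = np.array([318.0, 318.0, 353.0, 353.0])   # temperatures, K
eps_dot = np.array([1e-7, 5e-7, 2e-6, 8e-6]) # plastic strain rates, 1/s
sigma = np.array([12.0, 18.0, 16.0, 24.0])   # stresses, MPa (illustrative only)

e = eps_dot * np.exp(Q_over_R / T)
x, sigma_m, dsigma = to_domain(sigma)
y, e_m, de = to_domain(e)
print(x.min(), x.max())  # 0.001 and 1.001
```

Retaining the minima and ranges (σ_m, Δσ, e_m, Δe) is essential, since the post-processing conversion needs them to recover the physical coefficients.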



Accordingly, the post-processing conversion of the coefficients can be approximated as C_1 = C·Δe·r_2^n, C_2 = A·b/Δσ, and C_3 = n, where r_2 = A·b/(Δσ − A·a).
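The conversion back from the network weights (C, A, n) to the physical Garofalo coefficients can be coded in a few lines; the r_2 term here follows one plausible reading of the approximation above and should be treated as illustrative, as are the numeric weights.

```python
def garofalo_postprocess(C, A, n, d_e, d_sigma, a=0.001, b=1.0):
    """Post-processing conversion of EINN weights (C, A, n) to Garofalo
    coefficients (C1, C2, C3), following the approximate formulas in the text."""
    r2 = A * b / (d_sigma - A * a)
    C1 = C * d_e * r2 ** n
    C2 = A * b / d_sigma
    C3 = n
    return C1, C2, C3

# Usage with illustrative fitted weights and mapping ranges.
C1, C2, C3 = garofalo_postprocess(C=0.8, A=2.5, n=4.0, d_e=3.0e5, d_sigma=12.0)
print(C2, C3)  # 2.5/12 and 4.0
```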

Anand Model
Anand et al. [28] proposed a set of viscoplastic constitutive equations for the rate-dependent deformation of metals. Recently, the Anand model has been extensively applied to microelectronic solders exhibiting large viscoplastic deformations. In addition to the activation energy, there are eight coefficients in the Anand model. A two-step approach is commonly employed to extract these eight coefficients [9,12,13].
The governing equation for the first step of the Anand model, including the ultimate tensile stress (σ*), plastic strain rate (ε̇_p), activation energy (Q), and temperature (T), is expressed in Equation (10), where ŝ, ξ, A, n, and m are the coefficients that need to be extracted.
Utilizing the same method as in the previous section, we assume e_0 = ε̇_p·e^{Q/RT}. Since the value of the strain rate is relatively small compared to other input parameters, a scaling factor R is applied, such that e = e_0/R. For consistency within this paper, the same activation function as in the previous section is assumed. The data pairs of {e} and {σ*} are collected from Motalab et al. [12,13]. Additional variables y and x are introduced to represent the output and input parameters, and the pre-processing mapping functions are defined such that e_m and Δe are the minimum value and maximum difference of set {e}, and likewise σ*_m and Δσ* for {σ*}; a_e, b_e, a_σ, and b_σ are the mapping coefficients. By defining β = ŝ/ξ, the new function can be written as Equation (12). Based on Equation (12), the EINN representation can be formulated as Figure 3. This network's definitions are listed in Table 2.
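Assuming the commonly used Anand saturation-stress relation σ* = (ŝ/ξ)·asinh[(ε̇_p·e^{Q/RT}/A)^m] = β·asinh[(e_0/A)^m] (not written out explicitly above), the grid-search initialization of the step-1 coefficients can be sketched as follows on synthetic data; all grids and "true" values are illustrative.

```python
import numpy as np

def anand_step1_grid(e, sigma_star, betas, As, ms):
    """Grid search for step-1 Anand coefficients (beta = s_hat/xi, A, m),
    assuming the common saturation-stress form
        sigma* = beta * asinh((e / A) ** m),
    with e = eps_dot_p * exp(Q/RT) already scaled as in the text."""
    best, best_mse = None, np.inf
    for b in betas:
        for A in As:
            for m in ms:
                pred = b * np.arcsinh((e / A) ** m)
                mse = np.mean((pred - sigma_star) ** 2)
                if mse < best_mse:
                    best, best_mse = (b, A, m), mse
    return best, best_mse

# Usage on synthetic data generated from known coefficients (illustrative).
e = np.logspace(2, 6, 12)
true = 30.0 * np.arcsinh((e / 1.0e4) ** 0.2)
(beta, A, m), mse = anand_step1_grid(
    e, true,
    betas=np.linspace(20, 40, 21),
    As=np.logspace(3, 5, 21),
    ms=np.linspace(0.1, 0.3, 21),
)
print(beta, m)  # recovers 30.0 and 0.2 on this grid
```

In practice such a grid search only supplies initial values; the EINN learning and BI refinement then take over.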

Accordingly, the post-processing of the coefficients can be written in terms of avg(x) and avg(y), the averages of {e} and {σ*}. The governing equation of the second step of the Anand model is listed in Equation (14), where s_0, a, and h_0 are the three remaining coefficients. The parameter c is defined in Equation (15), and ξ is defined as the smallest positive real number that keeps c < 1.
We assume that x is given by the sinh term, y = (σ* − σ), l = σ*, and z = ε_p, and the pre-processing mapping functions are defined as in Equation (16). By assuming 1 − a = a′, the new function can be written as Equation (17). Based on Equation (17), the EINN representation can be formulated as Figure 4. This network's definitions are listed in Table 3. Moreover, the post-processing of the coefficients can be derived as Equation (18).



Chaboche Model
The Chaboche model [15,29] is often applied to represent metallic materials exhibiting the Bauschinger effect under cyclic loading. The original function can be written as Equation (19), where α and ε_p are the back tensile stress and the plastic strain, σ_0 is the initial yielding stress, and C and γ are the fitting coefficients. To simplify the equation, we substitute C/γ as β. Let x = ε_p and y = α as the parameters, with the pre-processing mapping functions of Equation (20), where ε_{p,m} and Δε_p are the minimum value and maximum difference of set {ε_p}, likewise α_m and Δα for {α}, and s = σ_0. Hence, the new function can be re-written as Equation (21), with the EINN formulation shown in Figure 5 and the neuron definitions listed in Table 4.
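As a conventional least-squares baseline for checking recovered Chaboche coefficients (the EINN learning itself avoids this kind of solve; this is only a reference sketch), one can assume the monotonic-loading form σ = σ_0 + β(1 − e^{−γ·ε_p}) and scan γ while solving the remaining linear part exactly:

```python
import numpy as np

def fit_chaboche(eps_p, stress, gammas):
    """Fit sigma = sigma0 + beta * (1 - exp(-gamma * eps_p)) by scanning
    gamma and solving the remaining linear problem (sigma0, beta) exactly.
    A plain least-squares sketch, not the EINN learning itself."""
    best, best_mse = None, np.inf
    for g in gammas:
        basis = 1.0 - np.exp(-g * eps_p)
        M = np.column_stack([np.ones_like(eps_p), basis])
        coef, *_ = np.linalg.lstsq(M, stress, rcond=None)
        mse = np.mean((M @ coef - stress) ** 2)
        if mse < best_mse:
            best, best_mse = (coef[0], coef[1], g), mse
    sigma0, beta, gamma = best
    return sigma0, beta * gamma, gamma      # return (sigma0, C, gamma)

# Usage on synthetic data from known coefficients (illustrative).
eps_p = np.linspace(0.0, 0.02, 40)
stress = 20.0 + 15.0 * (1.0 - np.exp(-400.0 * eps_p))
sigma0, C, gamma = fit_chaboche(eps_p, stress, gammas=np.linspace(100, 800, 141))
print(sigma0, gamma)  # close to 20.0 and 400.0
```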


Furthermore, the post-processing of the coefficients can be derived as Equation (22).

Applications
Building on the EINN formulation and Bayesian Inference (BI) iteration described in the preceding sections, this section discusses the extraction of coefficients from the hyperbolic Garofalo, nine-parameter Anand, and Chaboche models for the SAC305 solder material.

Hyperbolic Garofalo Model
The experimental dataset is drawn from Xiao and Armstrong [10]. To determine the coefficient C in Equation (9), we employ a grid search combined with a bisection optimization technique, whereas the EINN structure for coefficients A and n is addressed using standard backpropagation. To emphasize coefficient extraction for low temperatures (both 318 and 353 K) and low strain rates, ratios are assigned to the data pairs, as shown in Table 5. Table 5 also lists the input (plastic strain) and output (stress) of the EINN learning. Utilizing the post-processing conversion formula, the EINN coefficients C_1, C_2, and C_3 are obtained and presented in the middle column of Table 6. The hyperbolic model, compared against the experimental data, is depicted in Figure 6. The data at 388 K exhibit a more significant difference than the others, primarily due to the ratio setting outlined in Table 5.

A total of 1000 Bayesian Inference iterations were performed to obtain the distribution of the extracted coefficients. The distributions are displayed in Figure 7, represented as the ratio of each coefficient value to the average and expanded by the precision τ of the error function. As denoted by the dashed lines in Figure 7, which signify a 5% difference, stable distributions of coefficients C_2 and C_3 are observed, while the large variation in C_1 is attributed to the ratio setting, which induces a higher discrepancy among the 388 K data.

The coefficient extraction of the hyperbolic Garofalo constitutive equation highlights the flexibility of the EINN framework, as it allows for assigning ratios to data pairs to prioritize specific data. The fitting accuracy of the EINN results demonstrates a significant improvement compared to the original report [10], as indicated by the mean square error (MSE) of Table 6, computed following Equation (4). Although the distribution of the C_1 coefficient shows a small fraction of outliers from the BI iterations in Figure 7, both C_2 and C_3 remain within a ±5% difference. Over 1000 iterations, only 58 instances of C_1 show more than a ±5% difference from the average value. Consequently, a robust set of coefficients for the hyperbolic Garofalo constitutive model is achieved.
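The ±5% stability check on the BI samples can be reproduced in a few lines; the two chains below are synthetic stand-ins for actual BI output, one tight and one widely spread.

```python
import numpy as np

def outlier_fraction(samples, tol=0.05):
    """Fraction of BI samples deviating more than tol (e.g. 5%) from the mean,
    per coefficient; mirrors the stability check used for C1, C2, C3."""
    samples = np.asarray(samples, dtype=float)
    mean = samples.mean(axis=0)
    ratio = samples / mean                   # value relative to the average
    return np.mean(np.abs(ratio - 1.0) > tol, axis=0)

# Usage: a tight chain and a loose chain (synthetic, illustrative).
rng = np.random.default_rng(2)
chain = np.column_stack([
    rng.normal(1.0, 0.01, 1000),             # stable coefficient
    rng.normal(1.0, 0.08, 1000),             # widely spread coefficient
])
frac = outlier_fraction(chain)
print(frac)  # first entry near 0, second well above it
```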

Anand Model
In this section, the Anand constitutive model coefficient extraction is implemented for the lead-free SAC305 solder. The same activation energy as in the previous section is applied for the sake of research consistency. To extract the remaining eight coefficients of the Anand constitutive model, the first step involves utilizing the temperature- and strain-rate-dependent ultimate tensile stresses to determine the initial four coefficients. Subsequently, the second step defines the remaining parameters based on the temperature- and strain-rate-dependent stress and plastic strain.
The experimental data are sourced from Motalab et al. [12]. The EINN formulation, following Equation (12), is applied with the pre-processing mapping coefficients a_e, b_e, a_σ, and b_σ (Equation (13)) set to 0.8, 0.15, 0.9, and 0.1, respectively. It is vital to note that the selection of these mapping coefficients depends on the numerical characteristics of the dataset and is essential for preventing numerical errors during the backpropagation-based machine learning of the EINN formulation.
During the learning phase of the EINN formulation, the coefficients n * , m * , A * , and β of Equation (12) and Figure 3 are constrained to be positive. A grid search technique is employed to identify optimal initial values concerning the experimental data.
Furthermore, learning ratios are implemented to emphasize the learning preference for low strain rates and temperatures close to the working temperature of electronic components. After hundreds of iterations, the EINN coefficients are reported in Table 7. The MSE values indicate that the coefficients obtained from the EINN formulation exhibit accuracy similar to those obtained using conventional methods. The resulting step-1 Anand model is plotted in Figure 8.
The EINN formulation coefficients serve as initial inputs for Bayesian Inference (BI) to analyze the statistical distribution of the coefficients. Figure 9 illustrates the distribution of the coefficients, with dashed lines indicating differences within ±5%. Due to its low value, the coefficient n was not examined. Both coefficients A and β exhibit distributions within a ±5% difference. Out of 1500 values, only 61 cases of coefficient m exceed a ±5% difference, which can be attributed to the preference settings during the EINN learning process. The average coefficients obtained from BI are presented in the last column of Table 7 and are utilized for the subsequent coefficient extraction step in the Anand model.

The temperature- and strain-rate-dependent stress-strain curves are obtained from Motalab et al. [12]. The EINN formulation of the step-2 Anand model, as indicated in Equation (17) and Figure 4, is applied with the pre-processing mapping parameters shown in Table 8, based on Equation (16), while in the EINN learning procedure, the values of s_0 and h_0 are forced to be positive. A grid search technique is applied to define the optimal initial coefficients.
The learning ratios are implemented to emphasize the learning preference for low strain rates and temperatures close to the working temperature of electronic components, following the coefficient extraction strategy of Motalab et al. [12]. With Equation (18), the optimized coefficients can be obtained, as listed in Table 9, and the stress-strain curves at different strain rates from the Anand model are plotted against the experiment [12], as shown in Figure 10.
The dataset with high preference is applied to the BI iteration to mitigate the large coefficient shifting. Figure 11 plots the MSEs of EINNs and EINNs with BI against the Anand coefficient obtained by Motalab et al. [12], under different temperatures and strain rates. By adjusting the ratio of EINN network learning, the coefficient extraction can be fine-tuned to perform better in the room to the working temperature at a low strain rate, as indicated in Figure 11.


Chaboche Model
To study the lifetime of the ball-grid-array-type of advanced electronic packaging, the Chaboche material model is often applied [5,8]. The Chaboche model and its coefficients can be extracted from the temperature-dependent stress-strain curves by a given strain rate.
Unlike the previous sections, this section investigates the extraction of Chaboche coefficients from the Anand model. The Anand coefficients from Tables 6 and 7, adjusted via Bayesian Inference (BI), are utilized to generate inputs for the Chaboche model. A strain rate of 10^−5 (1/s) is maintained, given that the Anand coefficients have been optimized for lower strain rates, as demonstrated in the previous section. Stress-strain curves can be generated by the Anand model (Equations (14) and (15)) for each temperature point: −40 °C, −20 °C, 40 °C, 80 °C, and 122 °C.
The temperature-dependent stress-strain data serve as the training datasets. With the pre-processing mapping established by Equation (20), we apply the EINN formulation for the Chaboche model as Equation (21). Following this, the steepest-descent coefficient optimization is applied to the EINN formulation (as illustrated in Figure 5) with the neuron definitions outlined in Table 4. The post-processing of the coefficients (Equation (22)) allows for the acquisition of Chaboche coefficients at various temperatures. The resultant data are documented in Table 10, with the mean square errors (MSE) compared to the input dataset. The coefficients derived from the EINN formulation are subsequently incorporated into Bayesian Inference (BI) iterations for the temperature-dependent Chaboche model. Figure 12 delineates the distribution of the coefficients σ_0, C, and γ across different temperatures, magnified by the precision τ of the error function. The vertical axes in this figure represent the ratio of the coefficient value obtained at each BI iteration to the averaged value. Table 11 contains the averaged coefficients post-BI. While variations in all coefficients lie within a ±5% difference, a larger variety, coupled with a lower MSE, as listed in Tables 10 and 11, is evident at higher temperatures. This suggests a reduced coefficient sensitivity at these elevated temperatures. By introducing Young's modulus obtained by linear extrapolation from the experiment [12], the temperature-dependent stress-strain curves are plotted in Figure 13.
The temperature-dependent stress-strain curves from the Chaboche model using the coefficients in Table 11.
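The steepest-descent optimization of the Chaboche coefficients against a stress-strain curve can be illustrated with a minimal sketch. This is not the paper's EINN implementation: it assumes the standard monotonic closed form σ = σ0 + (C/γ)(1 − exp(−γ·εp)) for the Chaboche hardening law, and the data, initial guesses, and learning rates are purely illustrative.

```python
import numpy as np

# Hypothetical sketch: steepest-descent fit of Chaboche coefficients
# (sigma0, C, gamma) to a stress-strain curve, assuming the monotonic
# closed form  sigma = sigma0 + (C/gamma)*(1 - exp(-gamma*ep)).
def chaboche_stress(ep, sigma0, C, gamma):
    return sigma0 + (C / gamma) * (1.0 - np.exp(-gamma * ep))

# Synthetic "training" curve standing in for the Anand-generated data.
ep = np.linspace(0.0, 0.02, 50)
target = chaboche_stress(ep, 30.0, 2000.0, 150.0)

# Steepest descent on the mean-square error, with per-parameter
# learning rates because the coefficients differ by orders of magnitude.
theta = np.array([20.0, 1500.0, 100.0])   # initial [sigma0, C, gamma]
lr = np.array([1e-2, 1e+1, 1e-1])
for _ in range(5000):
    pred = chaboche_stress(ep, *theta)
    r = pred - target
    # Analytic gradients of the stress w.r.t. each coefficient.
    e = np.exp(-theta[2] * ep)
    d_sigma0 = np.ones_like(ep)
    d_C = (1.0 - e) / theta[2]
    d_gamma = theta[1] * (ep * e / theta[2] - (1.0 - e) / theta[2] ** 2)
    grad = np.array([np.mean(2 * r * d) for d in (d_sigma0, d_C, d_gamma)])
    theta -= lr * grad

mse = np.mean((chaboche_stress(ep, *theta) - target) ** 2)
```

The per-parameter learning rates mirror the role of the pre-processing mapping in the EINN formulation: without rescaling, a single step size cannot serve coefficients that span several orders of magnitude.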


Conclusions
In this study, we developed the concept of Equation-Informed Neural Networks (EINNs) as an efficient method for extracting the coefficients of constitutive equations. Subsequently, Markov Chain Monte Carlo (MCMC) with numerical Bayesian Inference (BI) iterations was applied to estimate the distribution of these coefficients, thereby further refining them.
The EINN formulation was derived by leveraging graphical representation techniques to convert the mathematical form of constitutive equations into an equivalent EINN format. By carefully adjusting pre-processing mapping parameters and identifying dataset preferences, we could generate coefficients optimally aligned with the targeted application scenario.
The EINN formulation has been successfully applied to the hyperbolic Garofalo, Anand, and Chaboche constitutive models. This paper details the EINN formulation with its neural network format, the definition of each neuron, the appropriate pre-processing techniques, and the post-processing of the coefficients.
The extraction of coefficients for the hyperbolic Garofalo and Anand models was conducted using experimental results from lead-free SAC305 solder material studies by Xiao and Armstrong [10] and Motalab et al. [12,13]. Our report includes the employed pre-processing mapping techniques and parameters. With the dataset preference, the constitutive equations with the extracted coefficients performed better in the zone of interest.
Comparisons with the coefficients of the constitutive equations from the aforementioned studies demonstrated that those extracted from the EINN formulation were similar. Importantly, the mean square error (MSE) of the EINN formulation learning was comparable to those from the literature [10,12,13]. The MSE performance depends on many factors, such as the descriptive capability of the material model and the experimental measurement accuracy. In this research, the MSE is used to compare how the coefficients extracted by the EINNs perform relative to those obtained by the original methods.
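The comparison metric itself is simple: the mean square error between the stresses predicted with a given coefficient set and the reference stresses. The sketch below is illustrative, with made-up values standing in for the measured data and the two candidate coefficient sets.

```python
import numpy as np

# Minimal sketch of the comparison metric: mean square error between
# stresses predicted with a coefficient set and the reference stresses.
def mse(predicted, measured):
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return float(np.mean((predicted - measured) ** 2))

# Illustrative values only: comparing two candidate coefficient sets
# against the same reference data.
measured = [30.0, 35.2, 38.9, 41.1]
set_a = [29.5, 35.0, 39.3, 41.0]   # e.g. stresses from one coefficient set
set_b = [28.0, 33.9, 37.5, 40.2]   # e.g. stresses from another set
# The set with the lower MSE reproduces the data more closely.
```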
Moreover, the MCMC with numerical Bayesian Inference (BI) iteration technique was employed to analyze the robustness of the extracted coefficients against the experimental data, as shown in Figures 7, 9 and 12. A slightly higher variation was observed when the dataset preference was applied to the EINN learning. Nevertheless, the coefficients derived from the EINNs remained within a ±5% interval.
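The robustness analysis can be sketched with a random-walk Metropolis sampler, the simplest MCMC variant. This is a hypothetical single-coefficient example, not the paper's actual model: the square-root stress law, the precision τ, and all tuning values are assumed for illustration. The spread of the chain relative to its mean corresponds to the ratio plotted on the vertical axes of Figures 7, 9 and 12.

```python
import numpy as np

# Hypothetical sketch of the MCMC/BI step: a random-walk Metropolis
# sampler drawing one coefficient's posterior under a Gaussian error
# model with precision tau. Model and values are illustrative only.
rng = np.random.default_rng(0)

strain = np.linspace(0.0, 0.02, 30)
true_c = 40.0
data = true_c * np.sqrt(strain) + rng.normal(0.0, 0.3, strain.size)

def log_likelihood(c, tau=10.0):
    # Gaussian errors with precision tau (variance = 1/tau).
    r = data - c * np.sqrt(strain)
    return -0.5 * tau * np.sum(r ** 2)

samples = []
c = 30.0                       # starting value for the coefficient
ll = log_likelihood(c)
for _ in range(20000):
    prop = c + rng.normal(0.0, 0.5)          # random-walk proposal
    ll_prop = log_likelihood(prop)
    if np.log(rng.random()) < ll_prop - ll:  # Metropolis acceptance
        c, ll = prop, ll_prop
    samples.append(c)

posterior = np.array(samples[5000:])          # discard burn-in
# Spread of the chain relative to its mean, as in the ratio plots.
ratio_spread = posterior.std() / posterior.mean()
```

A larger `tau` sharpens the acceptance criterion and magnifies how far a proposed coefficient must move before it is rejected, which is the sense in which the coefficient distributions in the paper are "magnified by the precision τ".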
In conclusion, the combined use of EINNs with BI provides a powerful tool for extracting coefficients from temperature- and strain-rate-dependent constitutive equations with dataset preference. This is under the assumption that the SAC305 solder material characteristics can be described by the material model and that the experimental measurement is accurate enough. This approach provides both the values of the coefficients and their distribution against the training dataset.
This study's potential limitations include the dataset preference assumption, which may not universally apply across all scenarios. Additionally, the applicability of the EINN formulation to all forms of constitutive equations remains to be fully determined, necessitating further exploration. Moreover, advanced neural network backpropagation methods, such as the Levenberg-Marquardt (LM) algorithm, will be applied to the EINN framework in future work.