Article

A High-Generalizability Machine Learning Framework for Analyzing the Homogenized Properties of Short Fiber-Reinforced Polymer Composites

1 School of Aerospace Engineering and Applied Mechanics, Tongji University, Shanghai 200092, China
2 Department of Aeronautics and Astronautics, Fudan University, Shanghai 200433, China
* Authors to whom correspondence should be addressed.
Polymers 2023, 15(19), 3962; https://doi.org/10.3390/polym15193962
Submission received: 29 August 2023 / Revised: 25 September 2023 / Accepted: 25 September 2023 / Published: 30 September 2023

Abstract: This study aims to develop a high-generalizability machine learning framework for predicting the homogenized mechanical properties of short fiber-reinforced polymer composites. The ensemble machine learning model (EML) employs a stacking algorithm using three base models of Extra Trees (ET), eXtreme Gradient Boosting machine (XGBoost), and Light Gradient Boosting machine (LGBM). A micromechanical model of a two-step homogenization algorithm is adopted and verified as an effective approach to composite modeling with randomly distributed fibers, which is integrated with finite element simulations for providing a high-quality ground-truth dataset. The model performance is thoroughly assessed for its accuracy, efficiency, interpretability, and generalizability. The results suggest that: (1) the EML model outperforms the base members on prediction accuracy, achieving R² values of 0.988 and 0.952 on the train and test datasets, respectively; (2) the SHapley Additive exPlanations (SHAP) analysis identifies the Young's modulus of matrix, fiber, and fiber content as the top three factors influencing the homogenized properties, whereas the anisotropy is predominantly determined by the fiber orientations; (3) the EML model showcases good generalization capability on experimental data, and it has been shown to be more effective than high-fidelity computational models by significantly lowering computational costs while maintaining high accuracy.

1. Introduction

Applications of short fiber-reinforced polymer composites (SFRPCs) have been rapidly growing in fields such as bionic manufacturing, automotive, and aviation industries due to their light weight, high strength, and ease of processing and manufacturing [1,2]. A significant advantage of SFRPCs over continuous fiber composites is their ease of production, which makes them better suited to rapid, high-volume manufacturing of complex geometries at lower cost.
The mechanical performance of SFRPCs strongly depends on the microstructural parameters, such as the Young's modulus and Poisson's ratio of the fibers and matrix, fiber dimensions and volume content, and the alignment distribution of fibers [3]. In particular, the orientation of fibers and the number of fibers aligned with the loading conditions can be customized by the injection molding process during manufacturing [1]. By accurately predicting the mechanical performance of SFRPCs, engineers and scientists can evaluate materials' functional characteristics before manufacturing, reduce unnecessary experimental testing, streamline the entire design process, and lay a crucial foundation for expanding their applications and improving material performance.
The orientation distribution of the fibers of SFRPCs can be experimentally examined using microscopic techniques like optical diffraction, electron microscopy, and X-ray radiography [4,5]. Image-informed statistical models are accordingly proposed to address the fiber distributions such as the diffusion orientation distribution function [6]. However, the complexity and diversity of fiber orientation states also present the primary challenge in aspects of constitutive relations [7], computational models [8], and numerical implementations [9].
Great efforts have been devoted to performing multiscale simulations utilizing Representative Volume Element (RVE) models to involve more accurate distributed fibers [10]. RVEs often fail to effectively reflect the intricate spatial fiber orientation, and increasing algorithm difficulties occur with rising fiber volume content [11,12]. The high computational time cost of Finite Element Analysis (FEA) presents another significant hurdle in simulations. The two-step homogenization method is proposed as an alternative effective approach [3,13]. The hybrid framework employs an Orientation Averaging (OA) approach to derive the mechanical properties on the basis of analysis of RVEs involving unidirectional fibers. This method significantly reduces modeling complexity and offers broader applicability. Nevertheless, it still cannot avoid the time-consuming nature of finite element calculations [14].
In recent years, machine learning (ML) techniques have been widely adopted in relevant fields of biochemistry [15], material science [16,17], and mechanical performance analysis [18,19]. An end-to-end prediction paradigm trained on a simulated dataset is commonly reported [19,20]. As suggested in [21], by providing enormous amounts of relevant data, numerical modeling naturally complements the ML technique and aids in creating reliable data-driven models. Simulation data with composite RVEs have been widely used to develop ML surrogates for fast prediction of the homogenized properties of composites [22,23,24].
The model selection is key to determining the model's performance, which varies depending on the complexity and heterogeneity of the microstructures. The simplest tree models and support vector regression (SVR) are efficient in dealing with the homogenized elastic properties of 2D materials with idealized, well-arranged microstructures [25]. Meanwhile, deep neural networks (DNN) have demonstrated efficiency in discovering distinct structures and learning the nonlinear relationship between the feature vector of microstructures and the expected effective mechanical properties [14,26,27]. Advanced image-based convolutional neural networks (CNN) and generative adversarial networks (GAN) display a strong capability to predict the stress–strain curve of composites directly from RVEs that embed cracks and voids [28,29].
For composites with short fiber inclusions, CNN and DNN models are frequently used to investigate homogenized properties from the diverse three-dimensional spatial microstructures of fibers. Ref. [30] proposed a CNN model that takes 2D RVE images as input, covering a range of Young's moduli for carbon fibers and neat epoxy, and outputs visualizations of the stress components. Ref. [14] utilized a DNN model that incorporates a micromechanical model to investigate the homogenized elastic properties of short fiber-reinforced composites. On one hand, training neural networks, particularly CNN algorithms, is very time-consuming. On the other hand, existing works focus on building ever more powerful machine learning models while neglecting interpretability [31,32], leaving unknown the mechanism by which the features modulate the models. This lack of interpretability makes data-driven models untrustworthy and problematic in model generalization [33,34].
The selection of appropriate machine learning models and efficient explanation strategies is receiving more attention [35]. Ensemble machine learning (EML) [36] is a general meta-approach to machine learning that seeks better predictive performance by combining the predictions from multiple weak, simple learners, such as tree models, SVR, and Gradient Boosting models. The EML model has demonstrated its application in accelerating composite material performance investigation [37,38] and optimization design [39,40]. Regarding model interpretability methods, the SHapley Additive exPlanations (SHAP) method has gained increasing popularity; it explains how each feature influences the model and allows both local and global analysis [41]. SHAP analysis has offered valuable insights in the disciplinary fields of composite performance analysis [19,37,42,43] and sensor fault detection [44], as well as manufacturing processes [45].
In this study, we propose an interpretable and high-generalizability data-driven model for accurately predicting the homogenized mechanical properties of SFRPCs. We utilize a stacking algorithm with three base learners of ET, XGBoost, and LGBM. The dataset is constructed by numerical simulations using composite RVEs that are integrated with the two-step homogenization method. We comprehensively assess the model's accuracy, efficiency, interpretability, and generalization capability. In particular, a SHAP analysis is performed to extensively evaluate the influential features and their underlying mechanisms on the predictions. The model's capacity to generalize is thoroughly investigated by including experimental data that take into account the real fiber distribution of SFRPCs as reported in the literature [46,47].

2. Material and Method

2.1. Two-Step Homogenization Procedure

2.1.1. Orientation Tensor for Describing Fiber Distribution

Figure 1 depicts the fiber orientation within a unit sphere Ω . Within a Cartesian coordinate system, the orientation vector p can be properly defined using two angular values [ θ , α ] [48]:
$$\mathbf{p} = p_i \mathbf{e}_1 + p_j \mathbf{e}_2 + p_k \mathbf{e}_3 = (\cos\alpha \cdot \cos\theta)\,\mathbf{e}_1 + (\cos\alpha \cdot \sin\theta)\,\mathbf{e}_2 + (\sin\alpha)\,\mathbf{e}_3.$$
The probability distribution function ψ ( p ) can be used to address the orientation of all fibers within the region [48]:
$$\psi(\mathbf{p}) = \psi(\theta, \alpha), \qquad \theta \in (0, \pi), \quad \alpha \in \left(-\tfrac{\pi}{2}, \tfrac{\pi}{2}\right),$$
such that the fiber distributions within a specific region ω can be statistically described via a second-order orientation tensor a, which is derived from the probabilities:
$$\mathbf{a} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ & a_{22} & a_{23} \\ \mathrm{sym} & & a_{33} \end{bmatrix}, \qquad a_{ij} = \oint_{\Omega} p_i p_j \, \psi(\mathbf{p}) \, d\mathbf{p}.$$
The tensor a is symmetric, with a_ij = a_ji, and has a trace equal to 1. The orientation tensor can be further decomposed into a diagonal tensor Λ and a rotation tensor R:
$$\mathbf{a} = \mathbf{R}(\gamma_1, \gamma_2, \gamma_3) \, \boldsymbol{\Lambda} \, \mathbf{R}^{T}(\gamma_1, \gamma_2, \gamma_3),$$
where Λ = diag(a_1, a_2, a_3), and γ_1, γ_2, and γ_3 are the rotation angles around the axes e_1, e_2, and e_3, respectively.
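As a concrete illustration, the orientation tensor in Equation (3) can be approximated numerically by averaging the outer products of sampled fiber direction vectors. The following Python sketch is not part of the original workflow; the helper `orientation_tensor` is hypothetical and assumes equally weighted discrete fibers:

```python
import numpy as np

def orientation_tensor(theta, alpha):
    """Estimate the second-order orientation tensor a_ij from sampled fiber
    angles (theta, alpha): the integral in Eq. (3) is replaced by a discrete
    average over equally weighted fibers."""
    p = np.column_stack([np.cos(alpha) * np.cos(theta),   # p components, Eq. (1)
                         np.cos(alpha) * np.sin(theta),
                         np.sin(alpha)])
    return np.einsum("ni,nj->ij", p, p) / len(p)           # a_ij = <p_i p_j>

# Example: fibers clustered around e1 yield a tensor close to diag(1, 0, 0);
# the trace equals 1 by construction since each p is a unit vector.
rng = np.random.default_rng(0)
a = orientation_tensor(rng.normal(0.0, 0.05, 1000), rng.normal(0.0, 0.05, 1000))
print(np.round(a, 3), np.trace(a))
```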

2.1.2. The Two-Step Homogenization Method

The process of the two-step homogenization method is illustrated in Figure 2. Composites having aligned short fibers with uniform length and mechanical properties are first considered to build the unidirectional RVEs (denoted as UD RVEs). Accordingly, their homogenized properties can be obtained via FE simulations. Afterward, OA is applied to the UD RVEs with orientation tensors to calculate the properties of composite RVEs with distributed fiber orientations, without explicitly modeling RVEs with realistically distributed fibers [3,9].
The orientation-averaged stiffness tensor can be derived as [13]:
$$C_{ijkl} = B_1\, a_{ijkl} + B_2 \left( a_{ij}\delta_{kl} + a_{kl}\delta_{ij} \right) + B_3 \left( a_{ik}\delta_{jl} + a_{il}\delta_{jk} + a_{jl}\delta_{ik} + a_{jk}\delta_{il} \right) + B_4\, \delta_{ij}\delta_{kl} + B_5 \left( \delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk} \right),$$
$$\begin{aligned}
B_1 &= C_{11}^{UD} + C_{22}^{UD} - 2C_{12}^{UD} - 4C_{66}^{UD}, \\
B_2 &= C_{12}^{UD} - C_{23}^{UD}, \\
B_3 &= C_{66}^{UD} + \tfrac{1}{2}\left(C_{23}^{UD} - C_{22}^{UD}\right), \\
B_4 &= C_{23}^{UD}, \qquad B_5 = \tfrac{1}{2}\left(C_{22}^{UD} - C_{23}^{UD}\right),
\end{aligned}$$
where C_ijkl represents the stiffness matrix of the desired arbitrary-orientation RVE, a_ij and a_ijkl denote its second-order and fourth-order orientation tensors, respectively, and δ_ij is the Kronecker delta. The parameters B_i (i = 1–5) in Equation (6) are derived from the stiffness matrix of the UD RVE, C_ijkl^UD.
The 21 components of the stiffness matrix C can be acquired from Equations (5) and (6):
$$\mathbf{C} = \begin{bmatrix}
C_{11} & C_{12} & C_{13} & C_{14} & C_{15} & C_{16} \\
 & C_{22} & C_{23} & C_{24} & C_{25} & C_{26} \\
 & & C_{33} & C_{34} & C_{35} & C_{36} \\
 & & & C_{44} & C_{45} & C_{46} \\
 & \mathrm{sym} & & & C_{55} & C_{56} \\
 & & & & & C_{66}
\end{bmatrix}.$$
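To make the orientation-averaging step concrete, Equations (5) and (6) can be sketched in Python as below. This is a minimal illustration, not the authors' implementation; it assumes the UD stiffness is supplied as a 6×6 Voigt matrix `C_ud` and that the fourth-order orientation tensor `a4` has already been obtained (e.g., from a closure approximation):

```python
import numpy as np

def orientation_average(C_ud, a2, a4):
    """Orientation-average a UD stiffness matrix (6x6, Voigt notation) into the
    fourth-order stiffness tensor of an RVE whose fiber orientation is described
    by the tensors a2 (3x3) and a4 (3x3x3x3), following Eqs. (5) and (6)."""
    B1 = C_ud[0, 0] + C_ud[1, 1] - 2.0 * C_ud[0, 1] - 4.0 * C_ud[5, 5]
    B2 = C_ud[0, 1] - C_ud[1, 2]
    B3 = C_ud[5, 5] + 0.5 * (C_ud[1, 2] - C_ud[1, 1])
    B4 = C_ud[1, 2]
    B5 = 0.5 * (C_ud[1, 1] - C_ud[1, 2])

    d = np.eye(3)                        # Kronecker delta
    C = np.zeros((3, 3, 3, 3))
    for i, j, k, l in np.ndindex(3, 3, 3, 3):
        C[i, j, k, l] = (B1 * a4[i, j, k, l]
                         + B2 * (a2[i, j] * d[k, l] + a2[k, l] * d[i, j])
                         + B3 * (a2[i, k] * d[j, l] + a2[i, l] * d[j, k]
                                 + a2[j, l] * d[i, k] + a2[j, k] * d[i, l])
                         + B4 * d[i, j] * d[k, l]
                         + B5 * (d[i, k] * d[j, l] + d[i, l] * d[j, k]))
    return C   # convert back to the 6x6 matrix C of Eq. (7) as needed
```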

2.2. Feature Selection and Data Preparation

2.2.1. Feature Selection

The homogenized mechanical properties of SFRPCs are mainly determined by the properties of the matrix and fiber, as well as the microstructural parameters of the fibers [2]. For this specific problem, we predict the composite stiffness matrix using 12 features, selected based on the strategies discussed in [7,14].
The 12 input features include 7 properties [E_m, ν_m, E_f, ν_f, d, a, V_F] and 5 individual components of the orientation tensor in Equation (3) for indicating the distribution of the short fibers. E and ν denote Young's modulus and Poisson's ratio, respectively, with the subscripts m and f representing the matrix and fiber. d, a, and V_F describe the fiber diameter, the ratio of fiber length to diameter, and the volumetric fraction of fibers.

2.2.2. Data Preparation

The dataset is constructed by numerical simulations using composite RVEs with the two-step homogenization method.
Firstly, 180 sets of UD RVEs are set up for parametric simulations in ABAQUS 6.13. The parameter space in UD RVE simulations is designed as in Figure 3. The parameter pairs for matrix Young’s modulus and Poisson’s ratio (Figure 3a), fiber Young’s modulus and Poisson’s ratio (Figure 3b), fiber diameter and fiber aspect ratio (Figure 3c), and fiber volume fraction and diameter (Figure 3d) are chosen [14]. The fiber volume fraction V F follows a normal distribution and the rest of the parameter pairs obey uniform distributions.
Secondly, for each UD RVE, 60 different orientation tensors are applied using the OA method, yielding a total of 10,800 sets for calculation of the homogenized mechanical properties. The uniformity of orientation distributions is explored by creating diagonal tensors initially and then employing the rotation angles.
In particular, we choose ten diagonal tensors, three of which represent the unidirectional distribution (a = diag(1, 0, 0)), the two-dimensional random distribution (a = diag(0.5, 0.5, 0)), and the three-dimensional isotropic distribution (a = diag(0.33, 0.33, 0.33)). The remaining seven tensors symbolize randomly distributed fibers in three-dimensional space. Thus, for each set of UD RVEs, 60 different orientation tensors can be obtained by randomly rotating each diagonal tensor five times. Additionally, to prevent excessive concentration of angles, uniform distributions are applied to γ_1 and γ_3 in the interval (0, π), and the following formula is used to sample γ_2:
$$t = \frac{\gamma_2 - \sin(\gamma_2)}{\pi}, \qquad 0 < t < 1,$$
where t is selected randomly in the interval (0, 1).
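For illustration, one way of sampling the rotation triplets described above is shown in the following Python sketch. It is an assumed reading of Equation (8), with γ_2 recovered by numerically inverting the monotonic map t = (γ_2 − sin γ_2)/π; the function and variable names are hypothetical:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)

def sample_rotation_angles(n):
    """Draw n rotation triplets (gamma1, gamma2, gamma3): gamma1 and gamma3 are
    uniform on (0, pi); gamma2 follows Eq. (8) via inverse-transform sampling."""
    g1 = rng.uniform(0.0, np.pi, n)
    g3 = rng.uniform(0.0, np.pi, n)
    t = rng.uniform(1e-9, 1.0 - 1e-9, n)    # keep t strictly inside (0, 1)
    # (g - sin g)/pi increases monotonically from 0 to 1 on (0, pi), so each t
    # has a unique root that brentq can bracket on [0, pi].
    g2 = np.array([brentq(lambda g, ti=ti: (g - np.sin(g)) / np.pi - ti, 0.0, np.pi)
                   for ti in t])
    return np.column_stack([g1, g2, g3])
```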
Figure 4 displays the distributions of the diagonal tensors and rotation angles, created according to the process explained above. In Figure 4a, greater scattering of data points is seen at the vertices of the triangular surface, particularly in the case of a planar or 3D-random surface. This is a result of the limitations of the parameter space and the random generation of fiber orientation sets as in Equation (8). Figure 4b shows the randomness of the first 2000 sets of angles. The three angles in each set are used to rotate the diagonal orientation tensors around the three axes. It can be found that there are no significant gaps or clusters of data points in the distribution.

2.2.3. Data Analysis

The dataset with 10,800 samples is collected by the hybrid simulation framework using the two-step homogenization algorithm. Figure 3 and Figure 4 visualize the distribution of the input features following the statements in [14]. Figure A1 in Appendix A further describes the distribution of each parameter, and Figure A2 shows the correlation matrix that is obtained using a correlation study analysis.
The correlation analysis is presented via the criterion based on the Pearson Correlation Coefficient (PCC) [49], which is a measure of the linear dependence between two random variables. The PCC takes a value between −1 and 1 that measures the strength and direction of the relationship between two variables. As noted in Figure A2, the input features are basically independent, while the tensor variables a_11 and a_22 in Figure 4 imply weak negative correlations. This is aligned with the derivation of the diagonal tensor components in the two-step homogenization method and with the mechanical definitions in the knowledge base of fiber-reinforced polymer composites [50].
However, as stated in [51], the PCC only measures linear dependence and cannot reflect every correlation between variables. It is therefore necessary to apply the optimal ML model combined with the SHAP-based ML explainer to investigate the internal relations between model outputs and input features.
More knowledge and mathematical theories for PCC can be found in [36].
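As a small illustration of the correlation screening described above, the Pearson correlation matrix of the input features can be computed in a few lines. This is a sketch only; the DataFrame `features` below is a placeholder, not the actual 10,800-sample dataset:

```python
import numpy as np
import pandas as pd

# Placeholder feature table; in practice each row is one of the 10,800 samples
# and each column is one of the 12 input features (Em, Ef, VF, a11, ...).
rng = np.random.default_rng(0)
features = pd.DataFrame(rng.random((100, 4)), columns=["Em", "Ef", "VF", "a11"])

pcc_matrix = features.corr(method="pearson")   # Pearson coefficients in [-1, 1]
print(pcc_matrix.round(2))
```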

2.3. Ensemble Machine Learning

This study uses a stacking ensemble model [37,38], which involves fitting numerous model types to the same data and using a different model to learn how to combine the predictions most effectively.

2.3.1. Base Learners

Informed by reference works devoted to the homogenization problem in composites reinforced with different fiber types [12,14,22,23,24,25,28,37,52], as discussed in the Introduction, we experimented with a variety of base models on the dataset, ranging from the basic SVR, RF, and DT models to the more complicated ET, XGBoost, and LGBM models.
The preliminary comparison investigation also found that the basic yet interpretable models do not perform well on this problem (R² values less than 0.85), given that the distribution of short fibers introduces more features that influence the anisotropy of the composites. This is consistent with the findings of earlier studies that used simulated datasets to predict composite homogenization. We therefore decided on three options for designing the EML model: ET, XGBoost, and LGBM.
In particular, the ET algorithm is an enhancement of the Random Forest algorithm that employs a “Bagging” technique and consists of numerous decision tree learners [53,54]. Both XGBoost and LGBM use a gradient-boosting framework. LGBM performs leaf-wise (vertical) growth as opposed to level-wise (horizontal) growth in XGBoost, which produces more loss reduction and, as a result, improved accuracy while being faster [55]. As such, those models are trained on the unchanged train dataset, ensuring that they make different assumptions and, in turn, have less correlated prediction errors.
It should be noted that although the base learners are complicated models, they are far simpler than neural network-based models. Appendix B summarizes the preliminary knowledge for the base learners, such as backgrounds, objective functions, and loss functions. The introductory knowledge for models such as SVR, RF, and DT, which can be studied in depth in Reference [36], will not be addressed in this work.
Furthermore, functional interfaces for multiple classification, regression, and clustering methods are available in the machine learning library Scikit-learn [56], which includes the three base learners as well as the SVR, RF, and DT models. Users can simply utilize the library to create models in the Python programming language once they have an understanding of the underlying mathematical concepts.
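For illustration, the three base learners can be instantiated from their Python libraries as follows. This is a minimal sketch; the parameter values shown are placeholders, not the tuned values reported in Table 1:

```python
from sklearn.ensemble import ExtraTreesRegressor
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor

# Three base learners with arbitrary, illustrative settings.
base_learners = [
    ("et", ExtraTreesRegressor(n_estimators=200, random_state=0)),
    ("xgb", XGBRegressor(n_estimators=200, learning_rate=0.1, random_state=0)),
    ("lgbm", LGBMRegressor(n_estimators=200, learning_rate=0.1, random_state=0)),
]
```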

2.3.2. Stacking Mode

As shown in Figure 5, the stacking model takes into account diverse base learners, trains them concurrently, and then combines them by training a meta-learner to produce a prediction based on the predictions of the various base learners.
The base learners are trained using the original dataset, in which 5-fold cross-validation is employed to avoid overfitting. Each base learner has different hyperparameters that need to be optimized. The ensemble learning model takes the base learners' predictions as input features, with the ground-truth values in the dataset as the target, and attempts to learn how to best combine the input predictions to make a better output prediction [36].
This work trains the EML model on the constructed dataset to predict the 21 components of the stiffness matrix based on the 12 input features.
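A minimal sketch of this stacking arrangement is given below, using scikit-learn's StackingRegressor with 5-fold cross-validation. The choice of RidgeCV as the meta-learner, the multi-output wrapper, and the synthetic placeholder data are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor, StackingRegressor
from sklearn.linear_model import RidgeCV
from sklearn.multioutput import MultiOutputRegressor
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor

rng = np.random.default_rng(0)
X_train, y_train = rng.random((200, 12)), rng.random((200, 21))  # placeholder data

base_learners = [
    ("et", ExtraTreesRegressor(random_state=0)),
    ("xgb", XGBRegressor(random_state=0)),
    ("lgbm", LGBMRegressor(random_state=0)),
]

# Out-of-fold predictions of the base learners (cv=5) become the features of the
# meta-learner, which learns how to best combine them into the final prediction.
stack = StackingRegressor(estimators=base_learners, final_estimator=RidgeCV(), cv=5)

# scikit-learn stacks one target at a time, so the 21 stiffness components are
# covered by fitting one stacked model per output.
model = MultiOutputRegressor(stack).fit(X_train, y_train)
y_pred = model.predict(X_train[:5])
```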

2.4. The SHAP-Based Interpretation Analysis

Low interpretability is often caused by machine learning models’ complexity and highly non-linear architecture. Compared to conventional approaches that just specify what feature is crucial but fail to explain how that feature influences the predictions [57,58], the SHAP method offers a more effective means for evaluating the “black box” of machine learning models [41]. Based on cooperative game theory, the method uses SHAP values to quantify the impact of individual features by calculating the contribution among various permutations of feature combinations.
The explanatory model f(x) in response to an input feature vector x can be expressed as [41]:
$$f(x) = \varphi_0 + \sum_{i=1}^{N} \varphi_i x_i,$$
where φ_0 is a constant "base value" depicting the average of all predicted values, N is the total number of input features, and x_i is a binary feature indicator, with x_i = 1 representing a present feature and x_i = 0 an absent feature. φ_i is the SHAP value of the i-th feature, which measures the contribution of the i-th feature within the composition of all features.
The SHAP analysis is compatible with various model types. In SHAP, each feature is assigned an importance value, and the influence of all the features on the predicted responses can be assessed via global and local interpretations. According to Equation (9), the global interpretation uses SHAP values to depict feature relevance with a positive or negative impact on predictions, whereas the local interpretation generates SHAP values and illustrates how the fed-in features contribute to that specific prediction [32,44].
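The global and local interpretations discussed above can be produced with the SHAP Python library. The sketch below is hypothetical: it fits a single LGBM regressor on placeholder data for one output component and then draws the bar, beeswarm, and force plots analogous to Figures 8 and 9:

```python
import numpy as np
import shap
from lightgbm import LGBMRegressor

rng = np.random.default_rng(0)
X, y = rng.random((200, 12)), rng.random(200)     # placeholder data for one output

model = LGBMRegressor().fit(X, y)
explainer = shap.TreeExplainer(model)             # tree-model-specific SHAP explainer
shap_values = explainer.shap_values(X)            # shape: (n_samples, n_features)

shap.summary_plot(shap_values, X, plot_type="bar")   # global importance (cf. Figure 8a-c)
shap.summary_plot(shap_values, X)                    # beeswarm plot (cf. Figure 8d-f)
shap.force_plot(explainer.expected_value, shap_values[0], X[0],
                matplotlib=True)                     # local explanation (cf. Figure 9)
```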

2.5. Hyperparameter Optimization and Model Training

We build, train, and optimize the EML model with the PyTorch package in Python [59].
Hyperparameter tuning plays a crucial role in the prediction performance of an ML model. Selecting the optimal values for the hyperparameters of a machine learning algorithm is a critical step for developing an accurate and trustworthy model [60]. This work implements hyperparameter optimization using the grid search method [61,62], which examines all candidate values in the search space in order to find the optimal structure for the three base learners. Additionally, 5-fold cross-validation is used to overcome the issue of overfitting and is combined with the grid-search-based hyperparameter tuning technique to identify the optimal values.
Basic knowledge of ML techniques of grid search and K-fold cross-validation are provided in Appendix C.1 and Appendix C.2, respectively. For the three base learners, each algorithm had its unique hyperparameter to be optimized, with a detailed description provided in Appendix C.3.
The optimal values of hyperparameters of the three base learners are listed in Table 1.
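As a hedged illustration of the grid search with 5-fold cross-validation, the sketch below tunes a single LGBM learner on placeholder data; the grid values are arbitrary examples, not the grids used to obtain Table 1:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from lightgbm import LGBMRegressor

rng = np.random.default_rng(0)
X, y = rng.random((300, 12)), rng.random(300)        # placeholder data for one output

param_grid = {                                        # illustrative grid only
    "num_leaves": [31, 63],
    "n_estimators": [200, 500],
    "learning_rate": [0.05, 0.1],
}
search = GridSearchCV(LGBMRegressor(), param_grid, cv=5,
                      scoring="neg_mean_squared_error", n_jobs=-1)
search.fit(X, y)
print(search.best_params_)                            # best combination found by the grid
```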

3. Results and Discussion

3.1. Validation of Dataset Based on the Two-Step Homogenization Method

The method utilizes a statistical tensor to effectively involve the fiber orientations in homogenization calculations [2,14,63]. A brief validation analysis is first conducted to ensure the accuracy of the two-step homogenization method.
Figure 6 demonstrates one UD RVE and four RVEs with different fiber rotation angles. The microstructural parameters for the UD RVE (specified as α = 0, β = 0) in Figure 6a and the simulated homogenized mechanical properties are given in Table 2 and Table 3, respectively.
The homogenized mechanical properties for the RVEs in Figure 6b–e are first calculated using the two-step homogenization method. Meanwhile, full FEA simulations using the same RVEs are conducted in ABAQUS to verify the above hybrid approach, with the comparison shown in Table 4.
It can be seen that, compared to the FEA-determined results, the average error of the calculated results using the two-step homogenization method is 0.82 % . Hence, the homogenized mechanical properties of the SFRPC RVEs with the considered fiber orientations derived by the two-step homogenization framework are validated as reliable and effective.

3.2. Model Evaluation on Prediction Accuracy

3.2.1. Evaluation Metrics

The coefficient of determination R 2 , mean squared error (MSE), and mean absolute percentage error (MAPE) are adopted as evaluation metrics, with the following formulas [64]:
$$R^2 = 1 - \frac{\sum_i \left(\hat{y}_i - y_i\right)^2}{\sum_i \left(\bar{y} - y_i\right)^2}$$
$$\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left(\hat{y}_i - y_i\right)^2$$
$$\mathrm{MAPE} = \frac{100\%}{n} \sum_{i=1}^{n} \left| \frac{\hat{y}_i - y_i}{y_i} \right|$$
where y_i is the ground truth of the i-th sample point in the database, ŷ_i is the value predicted by the EML model, ȳ is the average value of all samples, and n is the number of samples.
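These three metrics are available directly in scikit-learn; the short sketch below shows one possible way to evaluate them on a pair of placeholder arrays (the numbers are purely illustrative):

```python
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_percentage_error

y_true = np.array([10.2, 11.5, 9.8])      # placeholder ground-truth values
y_pred = np.array([10.0, 11.8, 9.9])      # placeholder EML predictions

r2 = r2_score(y_true, y_pred)                                    # Eq. (10)
mse = mean_squared_error(y_true, y_pred)                         # Eq. (11)
mape = 100.0 * mean_absolute_percentage_error(y_true, y_pred)    # Eq. (12), in percent
print(r2, mse, mape)
```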

3.2.2. Accuracy Comparison between the EML and Base Learners

The EML model combines three base learners to predict the stiffness matrix components from the 12 input features. Table 5 summarizes the comparison of the prediction accuracy of the EML model with the three base learners on both the training and testing sets. On an overall basis, the model's accuracy on the training dataset is higher. Among the base learners, the LGBM model achieves the highest R² values of 0.984 and 0.946 on the train and test datasets, respectively. It also performs the best in the two other evaluation metrics of MSE and MAPE. The three base learners are outperformed by the EML model in each evaluation metric (R² = 0.988, MSE = 2.545 × 10^6, MAPE = 0.567% on train, and R² = 0.952, MSE = 1.260 × 10^16, MAPE = 0.906% on test), indicating an effective enhancement of the prediction accuracy.
Figure 7 further visualizes the computed R² with scatter plots of predicted vs. ground-truth values of the matrix component C_15 using the different machine learning models. It can be seen that the ET model is the least accurate predictor of the three base learners, with R² values on the train and test sets of 0.951 and 0.894, respectively. In contrast, the R² values for the EML model on the train and test sets are improved to 0.969 and 0.908, respectively, making it the best-fitting model.

3.2.3. Model Performance of the EML Model on a Testing Sample

To fully evaluate the model's efficacy in making predictions for the 21 components of the matrix C, Table 6 summarizes the evaluation metrics R², MSE, and MAPE of the EML model's outputs on a testing sample. It can be observed that the EML model predicts 9 components with high precision, such as C_11, with R² values greater than 0.999. The prediction accuracy is slightly reduced but still reaches a high level of R² > 0.90 for the remaining 12 outputs, such as C_14, still indicating a well-fitted model.
The variation in prediction accuracy is commonly seen in developing data-driven models, which is attributed to the dataset construction, and the selection of input features [37]. Overall, the EML model showcases superior accuracy in terms of using 12 fed-in features and predicting the matrix of the polymer composites.

3.3. Model Interpretation via SHAP Analysis

In order to provide a comprehensive overview of how the anisotropic mechanical properties are influenced by the microstructural variables, the original output stiffness matrix C in Equation (7) has been translated into the homogenized Young's modulus E, Poisson's ratio ν, and shear modulus G. The Young's moduli E_11, E_22, and E_33, which are of most concern and characterize the anisotropy of SFRPCs, are specifically subjected to global and local SHAP analysis [11].

3.3.1. Global Interpretation

Figure 8 presents a global interpretation of the feature importance analysis for E_11, E_22, and E_33. Figure 8a–c are bar plots of SHAP values, which illustrate the importance rank and direction of influence of each input feature. As shown, the homogenized Young's modulus is primarily influenced by three factors: the Young's moduli E_m and E_f and the fiber content V_F. Notably, all of these factors exert a positive influence on the homogenized Young's modulus. This phenomenon arises from the fact that E_m and E_f represent vital mechanical properties of the composite material constituents, while V_F governs the proportion of the strengthening component. In general, enhancing the mechanical properties of the constituents and increasing the content of the reinforcement component both contribute to improved homogenized performance of the composite material.
The orientation tensor components a_11, a_22, and a_33 play a pivotal role in determining the anisotropy of composites. It can be observed that the Young's modulus E_11 in Figure 8a is markedly and positively impacted by the feature a_11. This occurs because a_11 characterizes the distribution of fiber orientation in the 1-direction, with larger values of a_11 indicating a greater presence of fibers aligned in the 1-direction. Similar findings show that the features defining fiber orientation, a_22 and a_33, correspondingly determine the anisotropy for E_22 in Figure 8b and E_33 in Figure 8c. It is worth noting that, in Figure 8c, a_11 and a_22 display negative influences on E_33, which is attributed to the relation among the diagonal tensor components, a_33 = 1 − a_11 − a_22, as described in Figure 4.
Figure 8d–f are beeswarm plots that highlight the significance of each feature in relation to its actual relationship with the predicted outcomes. The beeswarm plots reveal the same feature information as the bar plots. The horizontal position of each dot is determined by the SHAP value of that feature, and the dots "pile up" along each feature row to rank the feature's importance. The color code represents the original numerical value of a feature. For example, high values of E_m and E_f increase the value of Young's modulus in each direction.
Both the bar plots and beeswarm plots indicate that the other features, including the Poisson's ratios, the fiber length-to-diameter ratio, and the deviatoric components of the orientation tensor, support the predictions while having smaller influences than the factors discussed above.
The SHAP-based explanation is consistent with theories and knowledge from mechanism analysis in simulations [11] and experimental findings [46].

3.3.2. Local Interpretation

The goal of SHAP local interpretability is to describe the contributions made by each input feature in terms of how individual predictions are made.
We select two testing samples, and Figure 9 visualizes the positive and negative contributors in predicting E 11 , E 22 , and E 33 . The red arrows indicate a positive impact of the corresponding input feature on the output, while the blue arrows signify a negative impact. According to Equation (9), the length of the arrows reflects how much the input feature has influenced the model.
For Sample (a), the small values of E_m = 5.590 GPa, E_f = 18.630 GPa, and V_F = 0.110 show a negative impact and result in comparably small magnitudes of E_11, E_22, and E_33. In comparison, the diagonal tensor components a_11, a_22, and a_33 correspondingly promote the Young's moduli, signifying a positive impact.
Sample (b) presents a scenario that is the opposite of Sample (a). For Sample (b), the local analysis quantifies the large inputs of E_m = 11.970 GPa, E_f = 94.960 GPa, and V_F = 0.275, leading to rather large magnitudes of E_11, E_22, and E_33. By contrast, the features that denote the fiber orientation (a_11, a_22, and a_33) show negative impacts.
It can be found that the local analysis’ findings are consistent with the global interpretations, while it details the modulating mechanism of the influencing features in each individual case. Understanding the feature contributions and their underlying modulating mechanism can also provide an optimized strategy to customize the composite properties by following the local analysis [65].

3.4. Model Generalization on Experimental Data

On the prepared simulation dataset, the trained EML model exhibits excellent accuracy and efficiency with the predictions being interpretable. To determine the model’s generalizability, we further apply the EML model to the unknown dataset obtained from experiments.
The first polymer composite, referred to as PA15 owing to its 15% fiber weight fraction, originates from [46] and is made of glass fibers and a polymer matrix. Another set of experimental data is derived from [47]; the material is referred to as PA6GF35, a polyamide-6 matrix with 35% (by weight) glass-fiber reinforcement. Table 7 lists the microstructural parameters of the two polymer composites, including the elastic properties of the matrix and fiber, as well as the measured fiber orientation tensors. Using the given parameters, we employ the EML model to predict the macroscopic mechanical properties of these short fiber-reinforced composites.
For PA15, Figure 10 compares the experimental measurements of E 11 , E 22 , E 33 and G 23 , with the predictions from the EML model, and FEA results using Digimat software. The proposed EML model performs well in predicting elastic properties, with a maximum relative error of less than 10%.
For PA6GF35, Figure 11 illustrates a comparison between the experimental results of Young’s modulus E 11 and E 22 and the EML-predicted results. Meanwhile, two high-fidelity computational models reported from [47] are included for prediction comparison, namely the “Mean data” and “Fixed length”. It can be noted that the EML model agrees well with the experimental results with a maximum error of less than 3%, with the predicted E 11 and E 22 closely aligned with the high-fidelity models.
In terms of prediction efficiency, as shown in Table 8, using a mesh of 19,937 elements, the Digimat-FE model requires 2911 s to complete the relevant calculations. In contrast, the EML model significantly lowers the computing cost and produces predictions in less than one second.
The EML technique displays a highly generalized machine learning model inside the two experimental scenarios with polymer composite reinforced with various fillings of glass fibers. Furthermore, it has been demonstrated to be significantly more cost-effective than high-fidelity computational models while maintaining excellent accuracy.

4. Conclusions and Outlook

This study proposes a novel interpretable and high-generalizability ensemble learning model to efficiently predict the homogenized properties of SFRPCs. A micromechanical model of a two-step homogenization algorithm is integrated to construct a diverse and representative dataset, in which the randomly distributed fibers are efficiently implemented. We perform hyperparameter optimization for each base learner of ET, XGBoost, and LGBM. The EML model is comprehensively assessed using metrics of accuracy, efficiency, interpretability, and generalization capability. The following conclusions can be drawn:
(1)
The two-step homogenization algorithm is validated as an effective approach with which to consider different fiber orientations, which favors the creation of a trustworthy dataset with a variety of the chosen features.
(2)
The EML model outperforms the base members of ET, XGBoost, and LGBM on prediction accuracy, achieving R² values of 0.988 and 0.952 on the train and test datasets, respectively.
(3)
According to the SHAP global analysis, the homogenized elastic properties are significantly influenced by E m , E f , and V F , whereas the anisotropy is predominantly determined by a 11 , a 22 , and a 33 . The SHAP local interpretation distinguishes the key modulating mechanism between the key features for individual predictions.
(4)
The EML algorithm showcases a highly generalized machine learning model on experimental data, and it is more efficient than high-fidelity computational models, drastically reducing computational expenses while preserving high accuracy.
The paradigm proposed in this study holds potential for applications that customize the fiber arrangements with different fiber fillings in polymer structures and, as a result, offers manufacturing process optimization techniques. The model is here evaluated for generalizability on the basis of two experimental cases. It can be further packaged as a Graphical User Interface (GUI) in order to improve its practical utility [66,67] for analyzing homogenized polymer composites with short fibers.
The limitations are also acknowledged. The current work focuses on analyzing the elastic properties of SFRPCs considering one type of reinforcing fiber in the polymer structure. The two-step homogenization method utilizes a statistical tensor to effectively involve the fiber orientations in the homogenization calculations, while the cross-linking of fibers, the mechanical interactions, and the morphology of fiber curvatures are ignored [68,69]. Hybrid composites containing two or more types of fiber fillings [70], as well as the mechanical interactions between fibers and the polymer matrix, matrix defects, voids, and failure, should be considered in future investigations [71].

Author Contributions

Y.Z.: Conceptualization, Methodology, Supervision, Investigation, Funding acquisition, Resources, Writing—Original Draft, review, and editing. Z.C.: Methodology, Software, Data Curation, Formal Analysis, Writing—Original Draft, review, and editing. X.J.: Conceptualization, Methodology, Supervision, Resources, manuscript review, and editing. All authors have read and agreed to the published version of the manuscript.

Funding

The funding supports include the National Natural Science Foundation of China (12302184), Shanghai Pujiang Talent Program (22PJ1413800), and Fundamental Research Funds for the Central Universities.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available upon reasonable request from the corresponding authors.

Conflicts of Interest

We have no conflict of interest to report.

Appendix A. Input Feature Distribution and Correlation Analysis

Figure A1. Distribution of input features for 10,800 samples in the dataset.
Figure A2. Correlation matrix analysis of the input features in the dataset via PCC measurement. The PCC takes a value between −1 and 1 that measures the strength and direction of the relationship between two variables [36].

Appendix B. Preliminaries on ML Models Used in This Study

Three ML models are introduced according to their distinguished architecture, namely, ET, XGBoost, and LGBM.

Appendix B.1. ET

ET [72] is a machine learning ensemble algorithm whose basic component is a decision tree.
For regression, the average of the outputs of all decision trees is used as the final output of ET. Since ET is an ensemble algorithm, every decision tree in the ensemble functions as an "individual learner" (see Figure 1 in Ref. [73]). For a fixed number of individual learners, the accuracy of ensemble learning is higher when their level of independence is higher.
As noted in Figure A3, the ET model is established according to the following strategy: (1) each decision tree is created from all the samples of the training dataset, while the features are selected at random; (2) when splitting a node, for each non-constant feature contained in that node, an arbitrary threshold between the maximum and minimum value of that feature is randomly selected; (3) a sample is assigned to the left branch if its feature value is greater than this threshold, and to the right branch otherwise, so that the samples in the node are partitioned into the two branches for this feature. All the features in the node are traversed, and the splitting scores of all the features are obtained as described above; the node then splits using the feature with the highest splitting score. The random processing approach used by ET increases the independence of each decision tree, and the ensemble learning model is improved as a result.
Figure A3. Schematic diagram of Extra Trees.
For the detailed mathematical theory of ET, please refer to Ref. [72], which will not be introduced further in this paper.

Appendix B.2. XGBoost

XGBoost is one of the most popular boosting tree algorithms for gradient boosting machines (GBM). Due to its outstanding problem-solving abilities and low demands on feature engineering, it has seen widespread application in the industrial sector. The fundamental idea of XGBoost is to continuously add and train new trees to fit residual errors from the previous iteration. The training error and the regularization, written as follows, constitute the objective function of XGBoost [64]:
$$\mathrm{Obj}(\theta) = L(\theta) + \Omega(\theta),$$
where L represents the loss function used to calculate how far the predicted values deviate from the actual values, and Ω measures the training model's complexity to prevent overfitting:
$$\Omega(\theta) = \gamma T + \frac{1}{2} \lambda \lVert \omega \rVert^2,$$
where T denotes the total number of leaf nodes and ω reflects the score of each leaf node. γ and λ are controlling factors used to prevent overfitting.
When a new tree is created to fit the residual errors of the last iteration, the objective function is therefore approximated via an appropriate function:
$$L^{(t)} = \sum_{i=1}^{n} l\left(y_i,\ \hat{y}_i^{(t-1)} + f_t(x_i)\right) + \Omega(f_t),$$
$$L^{(t)} \approx \sum_{i=1}^{n} \left[ l\left(y_i, \hat{y}_i^{(t-1)}\right) + g_i f_t(x_i) + \frac{1}{2} h_i f_t^{2}(x_i) \right] + \Omega(f_t),$$
where g i is the first-order derivative and h i denotes the second-order derivative.
Since each instance will eventually be assigned to a single leaf node, all instances that are members of that leaf node can be put back together as:
$$\mathrm{Obj}^{(t)} \approx \sum_{j=1}^{T} \left[ \left( \sum_{i \in I_j} g_i \right) w_j + \frac{1}{2} \left( \sum_{i \in I_j} h_i + \lambda \right) w_j^2 \right] + \gamma T.$$
Therefore, the objective function can be derived as:
$$\mathrm{Obj}^{*} = -\frac{1}{2} \sum_{j=1}^{T} \frac{G_j^2}{H_j + \lambda} + \gamma T.$$

Appendix B.3. LGBM

LGBM itself is a boosting ensemble model that combines coupled weak learners into a strong model, and the algorithm has been available as open source since 2017 [74]. LGBM enhances the capabilities of Gradient-Boosted Decision Tree (GBDT) models by increasing the efficiency of the model and reducing memory usage.
The model employs a histogram-based algorithm to mitigate the effects of high-dimensional data, accelerate the computation, and prevent the forecasting system from overfitting. This technique transforms the continuous floating-point feature values into discrete integer bins and builds histograms of width k with depth restrictions, in place of the presorted-based Decision Tree (DT) technique. The training of LGBM additionally utilizes parallel learning with a parallel voting DT: the initial samples are distributed into multiple trees, the Local Voting Decision (LVD) algorithm selects the top-k attributes locally, and the global voting decision gathers the top-k LVD attributes in order to calculate the top-2k attributes over the k iterations of the procedure.
The Leaf-wise approach is used by LGBM during the optimization phase to identify suitable leaves. The objective function of LGBM is given as [75]:
$$\mathrm{Obj}^{(t)} = L^{(t)} + \Omega^{(t)} + c,$$
$$L^{(t)} = \sum_{i=1}^{n} \left( y_i - \hat{y}_i^{(t)} \right)^2,$$
where Ω^(t) and L^(t) signify the regularization and loss functions, respectively, c stands for the extra parameter preventing overfitting and optimizing the depth of the tree, and t denotes the sampling time. The loss function L^(t) represents the fitness of the model from the comparison of the real values y_i and the predicted outputs ŷ_i^(t) over the n samples.

Appendix C. Hyperparameter Optimization

A hyperparameter is a characteristic of a model that is external to the model and whose value cannot be estimated from data. The value of the hyperparameter has to be set before the learning process begins.

Appendix C.1. Grid search

Hyperparameters can be optimized using a variety of techniques, such as grid search, random search, and Bayesian optimization [76,77].
This work utilizes grid search as the tuning technique to seek and determine the most suitable hyperparameter values. The method simply performs an exhaustive search over a given subset of the hyperparameter space of the training algorithm (see Figure 4 in [77]). Given that the parameter space of some hyperparameters may be real-valued or unbounded, users usually need to set boundaries in order to apply a grid search. Grid search suffers in high-dimensional spaces, but it can often be easily parallelized since the hyperparameter values that the algorithm works with are usually independent of each other.
Please see [36,77] for more information on the grid search method for machine learning models.

Appendix C.2. K-Fold Cross-Validation

The goal of cross-validation is to estimate the generalization error. The K-fold cross-validation method is used to assess the reliability of the machine learning model. As illustrated in Figure A4, the dataset for the study case is divided into K equal folds using the stratified random sampling method. One of them is used as the validation set and the other (K − 1) folds are used as the training set. The process is repeated K times, so that each of the K folds is used as the validation set once. Then, the cross-validation error can be calculated and evaluated.
Figure A4. Schematic diagram for the K-fold cross-validation with K = 5.
For more details on cross-validation for machine learning models, please refer to References [36,77].
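A minimal sketch of the K-fold procedure (K = 5, as in Figure A4) using scikit-learn is shown below; the data and the choice of a single ET learner are placeholders for illustration:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
X, y = rng.random((500, 12)), rng.random(500)          # placeholder data

cv = KFold(n_splits=5, shuffle=True, random_state=0)   # K = 5 folds
scores = cross_val_score(ExtraTreesRegressor(), X, y, cv=cv, scoring="r2")
print(scores.mean(), scores.std())                     # cross-validation estimate of R^2
```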

Appendix C.3. Hyperparameters for the Base Learners

Each machine learning model typically has a multitude of hyperparameters, and most machine learning algorithms come with default values for their hyperparameters. Adjusting other hyperparameters often yields minimal changes in the model's predictive performance. Some of the default values perform well on this project, and only a few key hyperparameters truly impact the optimization process [36,78].
ET is an ensemble machine learning algorithm based on decision trees, and the main hyperparameters of ET include [72]:
  • n_estimators is the number of decision trees.
  • max_depth is the max depth of trees.
  • min_samples_split is the minimum number of samples required to split an internal node.
  • min_samples_leaf is the minimum number of samples required at a leaf node after splitting.
  • max_features is the number of features considered for a split.
XGBoost algorithm provides a large range of hyperparameters [64,79].
  • n_estimators controls the number of trees in the model. Increasing this value generally improves model performance, but can also lead to overfitting.
  • max_depth is used to limit the tree depth explicitly.
  • learning_rate determines the step size at each iteration while moving toward a minimum of a loss function.
  • gamma is the minimum loss reduction to create a new tree-split.
  • min_child_weight is the minimum sum of instance weight needed in a child node.
  • subsample denotes the proportion of random sampling for each tree.
  • colsample_bytree is the subsample ratio of columns when constructing each tree.
The three main hyperparameters in XGBoost are the learning rate, the number of trees, and the maximum depth of each tree. These hyperparameters control the overall behavior and performance of the model.
LGBM uses the leaf-wise tree growth algorithm [79,80].
  • num_leaves is the maximum number of leaves, the main parameter to control the complexity of the tree model.
  • min_data_in_leaf is a very important parameter to prevent over-fitting in a leaf-wise tree.
  • max_depth is used to limit the tree depth explicitly.
  • n_estimators is the number of boosted trees to fit.
  • learning_rate controls the step size at which the algorithm makes updates to the model weights.
  • colsample_bytree is the subsample ratio of columns when constructing each tree.
  • subsample dictates the proportion of random sampling for each tree.
For more introductive information on hyperparameters of machine-learning models, please refer to References [36,60,76,81].

References

  1. Hsissou, R.; Seghiri, R.; Benzekri, Z.; Hilali, M.; Rafik, M.; Elharfi, A. Polymer composite materials: A comprehensive review. Compos. Struct. 2021, 262, 113640. [Google Scholar] [CrossRef]
  2. Tucker, C.L., III; Liang, E. Stiffness predictions for unidirectional short-fiber composites: Review and evaluation. Compos. Sci. Technol. 1999, 59, 655–671. [Google Scholar] [CrossRef]
  3. Mirkhalaf, S.; Eggels, E.; Van Beurden, T.; Larsson, F.; Fagerström, M. A finite element based orientation averaging method for predicting elastic properties of short fiber-reinforced composites. Compos. Part B Eng. 2020, 202, 108388. [Google Scholar] [CrossRef]
  4. Chen, H.; Zhu, W.; Tang, H.; Yan, W. Oriented structure of short fiber-reinforced polymer composites processed by selective laser sintering: The role of powder-spreading process. Int. J. Mach. Tools Manuf. 2021, 163, 103703. [Google Scholar] [CrossRef]
  5. Lionetto, F.; Montagna, F.; Natali, D.; De Pascalis, F.; Nacucchi, M.; Caretto, F.; Maffezzoli, A. Correlation between elastic properties and morphology in short fiber composites by X-ray computed micro-tomography. Compos. Part A Appl. Sci. Manuf. 2021, 140, 106169. [Google Scholar] [CrossRef]
  6. Bloy, L.; Verma, R. On Computing the Underlying Fiber Directions from the Diffusion Orientation Distribution Function. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2008, New York, NY, USA, 6–10 September 2008; Metaxas, D., Axel, L., Fichtinger, G., Székely, G., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1–8. [Google Scholar]
  7. Sun, P.Y.C.G.H. Modeling the effective elastic and viscoelastic properties of randomly distributed short fiber-reinforced composites. Compos. Commun. 2022, 35, 101341. [Google Scholar]
  8. Zhao, Y.; Siri, S.; Feng, B.; Pierce, D.M. The macro-and micro-mechanics of the colon and rectum II: Theoretical and computational methods. Bioengineering 2020, 7, 152. [Google Scholar] [CrossRef]
  9. Tian, W.; Qi, L.; Su, C.; Liang, J.; Zhou, J. Numerical evaluation on mechanical properties of short-fiber-reinforced metal matrix composites: Two-step mean-field homogenization procedure. Compos. Struct. 2016, 139, 96–103. [Google Scholar] [CrossRef]
  10. Stommel, K.B. RVE modelling of short fiber-reinforced thermoplastics with discrete fiber orientation and fiber length distribution. SN Appl. Sci. 2020, 3, 2523–3963. [Google Scholar]
  11. Cai, H.; Ye, J.; Wang, Y.; Saafi, M.; Huang, B.; Yang, D.; Ye, J. An effective microscale approach for determining the anisotropy of polymer composites reinforced with randomly distributed short fibers. Compos. Struct. 2020, 240, 112087. [Google Scholar] [CrossRef]
  12. Maharana, S.K.; Soni, G.; Mitra, M. A machine learning based prediction of elasto-plastic response of a short fiber-reinforced polymer (SFRP) composite. Model. Simul. Mater. Sci. Eng. 2023, 31, 075001. [Google Scholar] [CrossRef]
  13. Advani, S.G.; Tucker, C.L. The Use of Tensors to Describe and Predict Fiber Orientation in Short Fiber Composites. J. Rheol. 1987, 31, 751–784. [Google Scholar] [CrossRef]
  14. Mentges, N.; Dashtbozorg, B.; Mirkhalaf, S. A micromechanics-based artificial neural networks model for elastic properties of short fiber composites. Compos. Part B Eng. 2021, 213, 108736. [Google Scholar] [CrossRef]
  15. Shahab, M.; Zheng, G.; Khan, A.; Wei, D.; Novikov, A.S. Machine Learning-Based Virtual Screening and Molecular Simulation Approaches Identified Novel Potential Inhibitors for Cancer Therapy. Biomedicines 2023, 11, 2251. [Google Scholar] [CrossRef] [PubMed]
  16. Chen, C.T.; Gu, G.X. Machine learning for composite materials. MRs Commun. 2019, 9, 556–566. [Google Scholar] [CrossRef]
  17. Ivanov, A.S.; Nikolaev, K.G.; Novikov, A.S.; Yurchenko, S.O.; Novoselov, K.S.; Andreeva, D.V.; Skorb, E.V. Programmable soft-matter electronics. J. Phys. Chem. Lett. 2021, 12, 2017–2022. [Google Scholar] [CrossRef]
  18. Zhao, J.; Chen, Z.; Tu, J.; Zhao, Y.; Dong, Y. Application of LSTM Approach for Predicting the Fission Swelling Behavior within a CERCER Composite Fuel. Energies 2022, 15, 9053. [Google Scholar] [CrossRef]
  19. Zhao, Y.; Chen, Z.; Dong, Y.; Tu, J. An interpretable LSTM deep learning model predicts the time-dependent swelling behavior in CERCER composite fuels. Mater. Today Commun. 2023, 37, 106998. [Google Scholar] [CrossRef]
  20. Zhao, J.; Chen, Z.; Zhao, Y. Toward Elucidating the Influence of Hydrostatic Pressure Dependent Swelling Behavior in the CERCER Composite. Materials 2023, 16, 2644. [Google Scholar] [CrossRef]
  21. Alber, M.; Buganza Tepole, A.; Cannon, W.R.; De, S.; Dura-Bernal, S.; Garikipati, K.; Karniadakis, G.; Lytton, W.W.; Perdikaris, P.; Petzold, L.; et al. Integrating machine learning and multiscale modeling—perspectives, challenges, and opportunities in the biological, biomedical, and behavioral sciences. NPJ Digit. Med. 2019, 2, 1–11. [Google Scholar] [CrossRef]
  22. Baek, K.; Hwang, T.; Lee, W.; Chung, H.; Cho, M. Deep learning aided evaluation for electromechanical properties of complexly structured polymer nanocomposites. Compos. Sci. Technol. 2022, 228, 109661. [Google Scholar] [CrossRef]
  23. Liu, B.; Vu-Bac, N.; Rabczuk, T. A stochastic multiscale method for the prediction of the thermal conductivity of Polymer nanocomposites through hybrid machine learning algorithms. Compos. Struct. 2021, 273, 114269. [Google Scholar] [CrossRef]
  24. Qi, Z.; Zhang, N.; Liu, Y.; Chen, W. Prediction of mechanical properties of carbon fiber based on cross-scale FEM and machine learning. Compos. Struct. 2019, 212, 199–206. [Google Scholar] [CrossRef]
  25. Miao, K.; Pan, Z.; Chen, A.; Wei, Y.; Zhang, Y. Machine learning-based model for the ultimate strength of circular concrete-filled fiber-reinforced polymer–steel composite tube columns. Constr. Build. Mater. 2023, 394, 132134. [Google Scholar] [CrossRef]
  26. Ye, S.; Huang, W.Z.; Li, M.; Feng, X.Q. Deep learning method for determining the surface elastic moduli of microstructured solids. Extrem. Mech. Lett. 2021, 44, 101226. [Google Scholar] [CrossRef]
  27. Ye, S.; Li, B.; Li, Q.; Zhao, H.P.; Feng, X.Q. Deep neural network method for predicting the mechanical properties of composites. Appl. Phys. Lett. 2019, 115. [Google Scholar] [CrossRef]
  28. Messner, M.C. Convolutional neural network surrogate models for the mechanical properties of periodic structures. J. Mech. Des. 2020, 142, 024503. [Google Scholar] [CrossRef]
  29. Korshunova, N.; Jomo, J.; Lékó, G.; Reznik, D.; Balázs, P.; Kollmannsberger, S. Image-based material characterization of complex microarchitectured additively manufactured structures. Comput. Math. Appl. 2020, 80, 2462–2480. [Google Scholar] [CrossRef]
  30. Gupta, S.; Mukhopadhyay, T.; Kushvaha, V. Microstructural image based convolutional neural networks for efficient prediction of full-field stress maps in short fiber polymer composites. Def. Technol. 2023, 24, 58–82. [Google Scholar] [CrossRef]
  31. Lipton, Z.C. The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue 2018, 16, 31–57. [Google Scholar] [CrossRef]
  32. Gustafsson, S. Interpretable Serious Event Forecasting Using Machine Learning and SHAP. 2021. Available online: https://www.diva-portal.org/smash/get/diva2:1561201/FULLTEXT01.pdf (accessed on 6 June 2021).
  33. Park, D.; Jung, J.; Gu, G.X.; Ryu, S. A generalizable and interpretable deep learning model to improve the prediction accuracy of strain fields in grid composites. Mater. Des. 2022, 223, 111192. [Google Scholar] [CrossRef]
  34. Zhao, Y.; Zhao, H.; Ai, J.; Dong, Y. Robust data-driven fault detection: An application to aircraft air data sensors. Int. J. Aerosp. Eng. 2022, 2022, 2918458. [Google Scholar] [CrossRef]
  35. Dong, Y.; Tao, J.; Zhang, Y.; Lin, W.; Ai, J. Deep learning in aircraft design, dynamics, and control: Review and prospects. IEEE Trans. Aerosp. Electron. Syst. 2021, 57, 2346–2368. [Google Scholar] [CrossRef]
  36. Zhou, Z.H. Ensemble Learning. In Machine Learning; Springer: Singapore, 2021; pp. 181–210. [Google Scholar]
  37. Wang, W.; Zhao, Y.; Li, Y. Ensemble Machine Learning for Predicting the Homogenized Elastic Properties of Unidirectional Composites: A SHAP-based Interpretability Analysis. Acta Mech. Sin. 2023, 40, 423301. [Google Scholar] [CrossRef]
  38. Liang, M.; Chang, Z.; Wan, Z.; Gan, Y.; Schlangen, E.; Šavija, B. Interpretable Ensemble-Machine-Learning models for predicting creep behavior of concrete. Cem. Concr. Compos. 2022, 125, 104295. [Google Scholar] [CrossRef]
  39. Milad, A.; Hussein, S.H.; Khekan, A.R.; Rashid, M.; Al-Msari, H.; Tran, T.H. Development of ensemble machine learning approaches for designing fiber-reinforced polymer composite strain prediction model. Eng. Comput. 2022, 38, 3625–3637. [Google Scholar] [CrossRef]
  40. Shi, M.; Feng, C.P.; Li, J.; Guo, S.Y. Machine learning to optimize nanocomposite materials for electromagnetic interference shielding. Compos. Sci. Technol. 2022, 223, 109414. [Google Scholar] [CrossRef]
  41. Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. Adv. Neural Inf. Process. Syst. 2017, 30, 4768–4777. [Google Scholar]
  42. Bakouregui, A.S.; Mohamed, H.M.; Yahia, A.; Benmokrane, B. Explainable extreme gradient boosting tree-based prediction of load-carrying capacity of FRP-RC columns. Eng. Struct. 2021, 245, 112836. [Google Scholar] [CrossRef]
  43. Yan, F.; Song, K.; Liu, Y.; Chen, S.; Chen, J. Predictions and mechanism analyses of the fatigue strength of steel based on machine learning. J. Mater. Sci. 2020, 55, 15334–15349. [Google Scholar] [CrossRef]
  44. Li, Z.; Zhang, Y.; Ai, J.; Zhao, Y.; Yu, Y.; Dong, Y. A Lightweight and Explainable Data-driven Scheme for Fault Detection of Aerospace Sensors. IEEE Trans. Aerosp. Electron. Syst. 2023, 1–20. [Google Scholar] [CrossRef]
  45. Zhou, K.; Enos, R.; Zhang, D.; Tang, J. Uncertainty analysis of curing-induced dimensional variability of composite structures utilizing physics-guided Gaussian process meta-modeling. Compos. Struct. 2022, 280, 114816. [Google Scholar] [CrossRef]
  46. Holmström, P.H.; Hopperstad, O.S.; Clausen, A.H. Anisotropic tensile behaviour of short glass-fibre reinforced polyamide-6. Compos. Part C Open Access 2020, 2, 100019. [Google Scholar] [CrossRef]
  47. Mehta, A.; Schneider, M. A sequential addition and migration method for generating microstructures of short fibers with prescribed length distribution. Comput. Mech. 2022, 70, 829–851. [Google Scholar] [CrossRef]
  48. Zhao, Y.; Siri, S.; Feng, B.; Pierce, D. Computational modeling of mouse colorectum capturing longitudinal and through-thickness biomechanical heterogeneity. J. Mech. Behav. Biomed. Mater. 2021, 113, 104127. [Google Scholar] [CrossRef] [PubMed]
  49. Benesty, J.; Chen, J.; Huang, Y.; Cohen, I. Pearson Correlation Coefficient. In Noise Reduction in Speech Processing; Springer: Berlin/Heidelberg, Germany, 2009; pp. 1–4. [Google Scholar]
  50. Mallick, P.K. Fiber-Reinforced Composites: Materials, Manufacturing, and Design; CRC Press: Boca Raton, FL, USA, 2007. [Google Scholar]
  51. Wang, Z.; Mu, L.; Miao, H.; Shang, Y.; Yin, H.; Dong, M. An innovative application of machine learning in prediction of the syngas properties of biomass chemical looping gasification based on extra trees regression algorithm. Energy 2023, 275, 127438. [Google Scholar] [CrossRef]
  52. Guo, P.; Meng, W.; Xu, M.; Li, V.C.; Bao, Y. Predicting mechanical properties of high-performance fiber-reinforced cementitious composites by integrating micromechanics and machine learning. Materials 2021, 14, 3143. [Google Scholar] [CrossRef]
  53. Marani, A.; Jamali, A.; Nehdi, M.L. Predicting ultra-high-performance concrete compressive strength using tabular generative adversarial networks. Materials 2020, 13, 4757. [Google Scholar] [CrossRef]
  54. Ahmad, M.W.; Reynolds, J.; Rezgui, Y. Predictive modeling for solar thermal energy systems: A comparison of support vector regression, random forest, extra trees and regression trees. J. Clean. Prod. 2018, 203, 810–821. [Google Scholar] [CrossRef]
  55. Konstantinov, A.V.; Utkin, L.V. Interpretable machine learning with an ensemble of gradient boosting machines. Knowl.-Based Syst. 2021, 222, 106993. [Google Scholar] [CrossRef]
  56. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  57. Strobl, C.; Boulesteix, A.L.; Kneib, T.; Augustin, T.; Zeileis, A. Conditional variable importance for random forests. BMC Bioinform. 2008, 9, 307. [Google Scholar] [CrossRef] [PubMed]
  58. Chung, H.; Ko, H.; Kang, W.S.; Kim, K.W.; Lee, H.; Park, C.; Song, H.O.; Choi, T.Y.; Seo, J.H.; Lee, J. Prediction and feature importance analysis for severity of COVID-19 in South Korea using artificial intelligence: Model development and validation. J. Med. Internet Res. 2021, 23, e27060. [Google Scholar] [CrossRef] [PubMed]
  59. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. Pytorch: An imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst. 2019, 32, 8026–8037. [Google Scholar]
  60. Probst, P.; Boulesteix, A.L.; Bischl, B. Tunability: Importance of hyperparameters of machine learning algorithms. J. Mach. Learn. Res. 2019, 20, 1934–1965. [Google Scholar]
  61. Bergstra, J.; Bardenet, R.; Bengio, Y.; Kégl, B. Algorithms for hyper-parameter optimization. Adv. Neural Inf. Process. Syst. 2011, 24, 2546–2554. [Google Scholar]
  62. Zhang, Y.; Zhao, H.; Ma, J.; Zhao, Y.; Dong, Y.; Ai, J. A deep neural network-based fault detection scheme for aircraft IMU sensors. Int. J. Aerosp. Eng. 2021, 2021, 3936826. [Google Scholar] [CrossRef]
  63. Zhao, Y.; Feng, B.; Pierce, D.M. Predicting the micromechanics of embedded nerve fibers using a novel three-layered model of mouse distal colon and rectum. J. Mech. Behav. Biomed. Mater. 2022, 127, 105083. [Google Scholar] [CrossRef]
  64. Dong, W.; Huang, Y.; Lehane, B.; Ma, G. XGBoost algorithm-based prediction of concrete electrical resistivity for structural health monitoring. Autom. Constr. 2020, 114, 103155. [Google Scholar] [CrossRef]
  65. Haque, M.A.; Chen, B.; Kashem, A.; Qureshi, T.; Ahmed, A.A.M. Hybrid intelligence models for compressive strength prediction of MPC composites and parametric analysis with SHAP algorithm. Mater. Today Commun. 2023, 35, 105547. [Google Scholar] [CrossRef]
  66. Le, T.T. Practical machine learning-based prediction model for axial capacity of square CFST columns. Mech. Adv. Mater. Struct. 2022, 29, 1782–1797. [Google Scholar] [CrossRef]
  67. Ralph, B.J.; Hartl, K.; Sorger, M.; Schwarz-Gsaxner, A.; Stockinger, M. Machine learning driven prediction of residual stresses for the shot peening process using a finite element based grey-box model approach. J. Manuf. Mater. Process. 2021, 5, 39. [Google Scholar] [CrossRef]
  68. Kallel, H.; Joulain, K. Design and thermal conductivity of 3D artificial cross-linked random fiber networks. Mater. Des. 2022, 220, 110800. [Google Scholar] [CrossRef]
  69. Bapanapalli, S.; Nguyen, B.N. Prediction of elastic properties for curved fiber polymer composites. Polym. Compos. 2008, 29, 544–550. [Google Scholar] [CrossRef]
  70. Swolfs, Y.; Gorbatikh, L.; Verpoest, I. Fibre hybridisation in polymer composites: A review. Compos. Part A Appl. Sci. Manuf. 2014, 67, 181–200. [Google Scholar] [CrossRef]
  71. Li, M.; Zhang, H.; Li, S.; Zhu, W.; Ke, Y. Machine learning and materials informatics approaches for predicting transverse mechanical properties of unidirectional CFRP composites with microvoids. Mater. Des. 2022, 224, 111340. [Google Scholar] [CrossRef]
  72. Geurts, P.; Ernst, D.; Wehenkel, L. Extremely randomized trees. Mach. Learn. 2006, 63, 3–42. [Google Scholar] [CrossRef]
  73. Chen, Z.T.; Liu, Y.K.; Chao, N.; Ernest, M.M.; Guo, X.L. Gamma-rays buildup factor calculation using regression and Extra-Trees. Radiat. Phys. Chem. 2023, 209, 110997. [Google Scholar] [CrossRef]
  74. Khan, I.U.; Aslam, N.; AlShedayed, R.; AlFrayan, D.; AlEssa, R.; AlShuail, N.A.; Al Safwan, A. A proactive attack detection for heating, ventilation, and air conditioning (HVAC) system using explainable extreme gradient boosting model (XGBoost). Sensors 2022, 22, 9235. [Google Scholar] [CrossRef]
  75. Meng, Q.; Ke, G.; Wang, T.; Chen, W.; Ye, Q.; Ma, Z.M.; Liu, T.Y. A communication-efficient parallel algorithm for decision tree. Adv. Neural Inf. Process. Syst. 2016, 29, 1279–1287. [Google Scholar]
  76. Yang, L.; Shami, A. On hyperparameter optimization of machine learning algorithms: Theory and practice. Neurocomputing 2020, 415, 295–316. [Google Scholar] [CrossRef]
  77. Liashchynskyi, P.; Liashchynskyi, P. Grid search, random search, genetic algorithm: A big comparison for NAS. arXiv 2019, arXiv:1912.06059. [Google Scholar]
  78. Amin, M.N.; Salami, B.A.; Zahid, M.; Iqbal, M.; Khan, K.; Abu-Arab, A.M.; Alabdullah, A.A.; Jalal, F.E. Investigating the bond strength of FRP laminates with concrete using LIGHT GBM and SHAPASH analysis. Polymers 2022, 14, 4717. [Google Scholar] [CrossRef] [PubMed]
  79. Neelam, R.; Kulkarni, S.A.; Bharath, H.; Powar, S.; Doddamani, M. Mechanical response of additively manufactured foam: A machine learning approach. Results Eng. 2022, 16, 100801. [Google Scholar] [CrossRef]
  80. Alabdullah, A.A.; Iqbal, M.; Zahid, M.; Khan, K.; Amin, M.N.; Jalal, F.E. Prediction of rapid chloride penetration resistance of metakaolin based high strength concrete using light GBM and XGBoost models by incorporating SHAP analysis. Constr. Build. Mater. 2022, 345, 128296. [Google Scholar] [CrossRef]
  81. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
Figure 1. The probability distribution of fibers in a unit sphere Ω; one fiber direction can be expressed using the two angles α and θ.
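For readers who wish to reproduce the orientation sampling illustrated in Figure 1, the short Python sketch below builds a unit fiber-direction vector from the two angles and accumulates the second-order orientation tensor a_ij = ⟨p_i p_j⟩. The spherical-coordinate convention (θ measured from the 3-axis, α as the azimuth in the 1–2 plane) is an assumption made for illustration, since the caption does not fix it.

import numpy as np

def fiber_direction(theta, alpha):
    # Assumed convention: theta = polar angle from the 3-axis, alpha = azimuth in the 1-2 plane
    return np.array([np.sin(theta) * np.cos(alpha),
                     np.sin(theta) * np.sin(alpha),
                     np.cos(theta)])

def orientation_tensor(thetas, alphas):
    # Second-order orientation tensor a_ij = <p_i p_j>, averaged over all fibers
    p = np.stack([fiber_direction(t, a) for t, a in zip(thetas, alphas)])
    return p.T @ p / len(p)

# Example: 10,000 directions drawn uniformly on the unit sphere
rng = np.random.default_rng(0)
thetas = np.arccos(1.0 - 2.0 * rng.random(10_000))   # uniform in cos(theta)
alphas = 2.0 * np.pi * rng.random(10_000)
print(orientation_tensor(thetas, alphas))            # close to diag(1/3, 1/3, 1/3)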
Figure 2. Illustration of the two-step homogenization method: calculation of the homogenized mechanical properties of RVEs considering (a) unidirectional fiber distribution in two dimensions; (b) unidirectional fiber distribution in three dimensions; (c) random fiber distribution in two dimensions; (d) random fiber distribution in three dimensions.
Figure 3. Parameter pairs for UD RVE simulations: (a) elastic properties E_m and ν_m; (b) elastic properties E_f and ν_f; (c) fiber diameter d and aspect ratio a; (d) fiber volume fraction V_F.
Figure 4. Distributions of (a) the diagonal orientation-tensor components a_11, a_22, a_33 and (b) the rotation angles γ_1, γ_2, γ_3 used to rotate the diagonal orientation tensors.
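The rotation referred to in Figure 4 is the standard second-order tensor transformation A′ = R A Rᵀ. A minimal sketch follows; composing γ_1, γ_2, γ_3 into the rotation matrix R via SciPy's intrinsic "XYZ" Euler convention is an assumption rather than the paper's stated choice.

import numpy as np
from scipy.spatial.transform import Rotation

def rotate_orientation_tensor(a_diag, gammas, convention="XYZ"):
    # Rotate a diagonal orientation tensor: A' = R A R^T (the trace is preserved).
    # The 'XYZ' Euler convention for (gamma1, gamma2, gamma3) is an assumption.
    A = np.diag(a_diag)
    R = Rotation.from_euler(convention, gammas, degrees=True).as_matrix()
    return R @ A @ R.T

A_rot = rotate_orientation_tensor((0.7, 0.2, 0.1), (30.0, 45.0, 60.0))
print(A_rot, np.trace(A_rot))   # trace remains 1.0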
Figure 5. Illustration of the stacking algorithm.
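As a companion to Figure 5, the following sketch assembles a stacking ensemble from the three base learners named in the paper (ET, XGBoost, LGBM) using scikit-learn's StackingRegressor. The synthetic data, the ridge meta-learner, and the five-fold out-of-fold scheme are illustrative assumptions, not the authors' exact configuration.

from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor, StackingRegressor
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor

# Synthetic stand-in for the microstructural feature set and one stiffness target
X, y = make_regression(n_samples=2000, n_features=13, noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

base_learners = [
    ("et", ExtraTreesRegressor(n_estimators=160, max_depth=21, random_state=0)),
    ("xgb", XGBRegressor(n_estimators=200, learning_rate=0.15, max_depth=6)),
    ("lgbm", LGBMRegressor(n_estimators=360, learning_rate=0.22)),
]
# Out-of-fold predictions of the base learners become meta-features for the final estimator
eml = StackingRegressor(estimators=base_learners, final_estimator=RidgeCV(), cv=5)
eml.fit(X_tr, y_tr)
print("test R^2:", eml.score(X_te, y_te))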
Figure 6. Validation of the numerical dataset with RVEs in ABAQUS with designed fiber orientations of: (a) UD RVE (θ = 0, α = 0); (b) θ = 0, α = 30; (c) θ = 45, α = 30; (d) θ = 60, α = 45; (e) θ = 90, α = 45.
Figure 7. Scatter plots of C_15: FEA-determined ground-truth values versus predictions from the ET, XGBoost, LGBM, and EML models, with the corresponding R² values.
Figure 8. Global SHAP feature-importance analysis for the Young's moduli E_11 (a,d), E_22 (b,e), and E_33 (c,f). Bar plots in (a–c) rank the features by importance, with red indicating a positive impact and blue a negative impact. Beeswarm plots in (d–f) highlight each feature's contribution to the predicted outcomes.
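Plots of the kind shown in Figure 8 can be generated with the SHAP package [41]. The sketch below fits a small LightGBM model on synthetic data and produces the global bar and beeswarm summaries; the feature names and data are placeholders, not the study's dataset.

import numpy as np
import pandas as pd
import shap
from lightgbm import LGBMRegressor

# Placeholder features named after the study's microstructural inputs
features = ["Em", "nu_m", "Ef", "nu_f", "d", "a", "VF", "a11", "a22", "a33"]
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((500, len(features))), columns=features)
y = 2.0 * X["Em"] + 1.5 * X["Ef"] * X["VF"] + 0.1 * rng.standard_normal(500)

model = LGBMRegressor(n_estimators=200).fit(X, y)
explainer = shap.TreeExplainer(model)   # exact, fast SHAP values for tree ensembles
shap_values = explainer(X)              # returns a shap.Explanation object
shap.plots.bar(shap_values)             # global importance ranking (cf. Figure 8a-c)
shap.plots.beeswarm(shap_values)        # per-sample feature impacts (cf. Figure 8d-f)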
Figure 9. Local SHAP analysis for the E_11, E_22, and E_33 predictions of two testing samples: (a) Sample a and (b) Sample b.
Figure 10. Model generalization on PA15: comparison of EML predictions with experimental results and Digimat-FE calculations.
Figure 11. Model generalization on PA6GF35: comparison of EML predictions with experimental results and two high-fidelity computational models.
Table 1. The optimal hyperparameters for the base learner models using grid search.
Base Learner Model | Hyperparameter | Optimal Value
ET | n_estimators | 160
ET | max_depth | 21
ET | min_samples_split | 2
ET | min_samples_leaf | 1
ET | max_features | 11
XGBoost | n_estimators | 200
XGBoost | learning_rate | 0.15
XGBoost | max_depth | 6
XGBoost | min_child_weight | 2
XGBoost | colsample_bytree | 1
XGBoost | subsample | 1
XGBoost | gamma | 0
LGBM | n_estimators | 360
LGBM | learning_rate | 0.22
LGBM | max_depth | −1
LGBM | min_child_weight | 3
LGBM | colsample_bytree | 1
LGBM | subsample | 0.7
LGBM | gamma | 0
LGBM | num_leaves | 31
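The hyperparameters in Table 1 were obtained by grid search; a minimal scikit-learn sketch for the ET base learner is given below. The grid bounds are illustrative assumptions chosen around the tabulated optimum, and X_train / y_train stand in for the homogenized-property dataset.

from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import GridSearchCV

param_grid = {                     # illustrative ranges bracketing the Table 1 optimum
    "n_estimators": [80, 120, 160, 200],
    "max_depth": [15, 18, 21, 24],
    "min_samples_split": [2, 4],
    "min_samples_leaf": [1, 2],
    "max_features": [9, 11, 13],
}
search = GridSearchCV(ExtraTreesRegressor(random_state=0), param_grid,
                      cv=5, scoring="r2", n_jobs=-1)
# search.fit(X_train, y_train)   # X_train, y_train: the homogenization dataset
# print(search.best_params_)     # expected to recover values such as those in Table 1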
Table 2. The microstructural parameters for the UD RVE in Figure 6a.
Parameter | E_m [GPa] | ν_m | E_f [GPa] | ν_f | Fiber Diameter [μm] | Fiber Length [μm] | V_F
Magnitude | 1.6 | 0.4 | 69 | 0.15 | 2 | 8 | 5%
Table 3. The simulated homogenized mechanical properties for the UD RVE in Figure 6a.
Mechanical Properties | E_11 [GPa] | E_22 [GPa] | E_33 [GPa] | ν_12 | ν_13 | ν_21
Magnitude | 2.187 | 1.827 | 1.830 | 0.393 | 0.392 | 0.328
Mechanical Properties | G_23 [GPa] | G_13 [GPa] | G_12 [GPa] | ν_23 | ν_31 | ν_32
Magnitude | 0.629 | 0.637 | 0.637 | 0.441 | 0.328 | 0.441
Table 4. Validation of simulations obtained via the two-step homogenization method and full FEA simulations on composite RVEs with different fiber orientations.
θ = 0, α = 30 | E_11 [GPa] | E_22 [GPa] | E_33 [GPa] | ν_12 | ν_13 | ν_21
FEA | 2.009 | 1.871 | 1.806 | 0.369 | 0.427 | 0.344
Two-step | 1.929 | 1.827 | 1.776 | 0.376 | 0.421 | 0.356
Relative error | 3.95% | 2.31% | 1.69% | 1.84% | 1.46% | 3.57%
ν_23 | ν_31 | ν_32 | G_23 [GPa] | G_13 [GPa] | G_12 [GPa]
FEA | 0.415 | 0.384 | 0.400 | 0.643 | 0.719 | 0.645
Two-step | 0.413 | 0.387 | 0.401 | 0.634 | 0.706 | 0.636
Relative error | 0.45% | 0.85% | 0.18% | 1.36% | 1.74% | 1.32%
θ = 45, α = 30 | E_11 [GPa] | E_22 [GPa] | E_33 [GPa] | ν_12 | ν_13 | ν_21
FEA | 1.786 | 1.786 | 1.777 | 0.398 | 0.396 | 0.398
Two-step | 1.780 | 1.780 | 1.775 | 0.398 | 0.395 | 0.398
Relative error | 0.30% | 0.35% | 0.09% | 0.05% | 0.19% | 0.01%
ν_23 | ν_31 | ν_32 | G_23 [GPa] | G_13 [GPa] | G_12 [GPa]
FEA | 0.396 | 0.394 | 0.394 | 0.666 | 0.666 | 0.682
Two-step | 0.395 | 0.395 | 0.395 | 0.668 | 0.668 | 0.686
Relative error | 0.14% | 0.16% | 0.28% | 0.32% | 0.36% | 0.59%
θ = 60, α = 45 | E_11 [GPa] | E_22 [GPa] | E_33 [GPa] | ν_12 | ν_13 | ν_21
FEA | 1.789 | 1.784 | 1.814 | 0.394 | 0.385 | 0.393
Two-step | 1.786 | 1.764 | 1.797 | 0.394 | 0.386 | 0.391
Relative error | 0.19% | 1.11% | 0.92% | 0.08% | 0.23% | 0.53%
ν_23 | ν_31 | ν_32 | G_23 [GPa] | G_13 [GPa] | G_12 [GPa]
FEA | 0.401 | 0.390 | 0.407 | 0.697 | 0.656 | 0.650
Two-step | 0.403 | 0.387 | 0.414 | 0.708 | 0.657 | 0.652
Relative error | 0.65% | 0.91% | 1.65% | 1.52% | 0.24% | 0.28%
θ = 90, α = 60 | E_11 [GPa] | E_22 [GPa] | E_33 [GPa] | ν_12 | ν_13 | ν_21
FEA | 1.823 | 1.777 | 1.938 | 0.414 | 0.355 | 0.404
Two-step | 1.832 | 1.776 | 1.950 | 0.414 | 0.353 | 0.404
Relative error | 0.46% | 0.06% | 0.62% | 0.00% | 0.43% | 0.07%
ν_23 | ν_31 | ν_32 | G_23 [GPa] | G_13 [GPa] | G_12 [GPa]
FEA | 0.385 | 0.377 | 0.419 | 0.694 | 0.635 | 0.633
Two-step | 0.383 | 0.373 | 0.424 | 0.714 | 0.636 | 0.634
Relative error | 0.40% | 1.03% | 1.18% | 2.83% | 0.12% | 0.25%
Table 5. Accuracy comparison between the EML model and the three base learners on the training set and test set.
Metric | ET (Train) | ET (Test) | XGBoost (Train) | XGBoost (Test) | LGBM (Train) | LGBM (Test) | EML (Train) | EML (Test)
R² | 0.981 | 0.932 | 0.983 | 0.943 | 0.984 | 0.946 | 0.988 | 0.952
MSE | 5.089 × 10^15 | 2.360 × 10^16 | 7.629 × 10^15 | 2.050 × 10^16 | 3.538 × 10^15 | 1.450 × 10^16 | 2.545 × 10^15 | 1.260 × 10^16
MAPE | 0.831% | 1.010% | 0.716% | 1.243% | 0.971% | 1.264% | 0.567% | 0.906%
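The three metrics reported in Tables 5 and 6 (R², MSE, MAPE) can be computed with scikit-learn as sketched below; the numerical inputs are made-up placeholders used only to demonstrate the calls.

import numpy as np
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_percentage_error

def report(y_true, y_pred):
    # R^2, MSE, and MAPE (in percent), as reported in Tables 5 and 6
    return {"R2": r2_score(y_true, y_pred),
            "MSE": mean_squared_error(y_true, y_pred),
            "MAPE": 100.0 * mean_absolute_percentage_error(y_true, y_pred)}

# Made-up stiffness values (Pa), purely to show the metric calls
y_true = np.array([2.187e9, 1.827e9, 1.830e9])
y_pred = np.array([2.190e9, 1.820e9, 1.835e9])
print(report(y_true, y_pred))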
Table 6. Accuracy evaluation of predictions by the EML model on a testing sample.
Metrics | C_11 | C_12 | C_13 | C_14 | C_15 | C_16 | C_22
R² | 0.99998 | 0.99999 | 0.99999 | 0.91273 | 0.95564 | 0.91864 | 0.99998
MSE | 1.99 × 10^16 | 1.36 × 10^16 | 1.09 × 10^16 | 8.8 × 10^15 | 2.57 × 10^15 | 9.94 × 10^15 | 2.38 × 10^16
MAPE | 0.336% | 0.533% | 0.708% | 0.671% | 0.773% | 1.063% | 0.347%
Metrics | C_23 | C_24 | C_25 | C_26 | C_33 | C_34 | C_35
R² | 0.99999 | 0.92600 | 0.92918 | 0.94278 | 0.99996 | 0.94169 | 0.91682
MSE | 1.04 × 10^16 | 6.36 × 10^15 | 7.65 × 10^15 | 3.74 × 10^15 | 5.64 × 10^16 | 3.65 × 10^15 | 1.30 × 10^16
MAPE | 0.587% | 0.939% | 0.809% | 1.005% | 0.812% | 1.218% | 1.010%
Metrics | C_36 | C_44 | C_45 | C_46 | C_55 | C_56 | C_66
R² | 0.91887 | 0.99929 | 0.90912 | 0.92698 | 0.99920 | 0.92795 | 0.99923
MSE | 1.18 × 10^16 | 4.46 × 10^16 | 1.8 × 10^15 | 1.5 × 10^15 | 5.04 × 10^15 | 2.31 × 10^15 | 4.82 × 10^15
MAPE | 1.114% | 0.917% | 1.215% | 1.981% | 0.924% | 1.179% | 0.881%
Table 7. The microstructural parameters of SFRPCs of PA15 and PA6GF35.
Polymer Composite | E_m [GPa] | ν_m | E_f [GPa] | ν_f | d | a | V_F | a_11 | a_22 | a_33
PA15 from [46] | 2.8 | 0.4 | 7.0 | 0.2 | 13.5 | 31.85 | 0.064 | 0.507 | 0.473 | 0.020
PA6GF35 from [47] | 3.0 | 0.4 | 7.2 | 0.22 | 10 | 25 | 0.193 | 0.786 | 0.196 | 0.018
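A minimal sketch of how the Table 7 parameters could be arranged as an input vector for the trained ensemble is given below. The feature ordering and the absence of any unit scaling are assumptions, and eml refers to the hypothetical fitted stacking model from the earlier sketch rather than the authors' actual model.

import numpy as np

# Table 7 entry for PA6GF35; the feature ordering is an assumption, not prescribed by the paper
pa6gf35 = {"Em": 3.0, "nu_m": 0.4, "Ef": 7.2, "nu_f": 0.22, "d": 10.0, "a": 25.0,
           "VF": 0.193, "a11": 0.786, "a22": 0.196, "a33": 0.018}
x = np.array([list(pa6gf35.values())])
# y_hat = eml.predict(x)   # 'eml': the hypothetical fitted stacking model from the earlier sketch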
Table 8. Model efficiency comparison between the EML model and Digimat-FE calculations.
Method | Setting | Computation Cost
Digimat-FE simulation | 19,937 elements | 2911 s
EML | training (10,800 samples) | 378 s
EML | prediction | <1 s