Evaluating the Stress-Strain Relationship of the Additively Manufactured Lattice Structures

Extensive research on additively manufactured (AM) lattice structures has been conducted to develop a generalized model that can interpret how strongly operational variables affect mechanical properties. However, the currently used techniques, such as physics models and multi-physics simulations, provide only case-specific interpretations of those qualities and are not general enough to assess the mechanical properties of AM lattice structures of different topologies produced from different materials via several fabrication methods. To tackle this problem, this study develops an optimal deep learning (DL) model based on more than 4000 data points, optimized by analyzing three different hyper-parameter optimization schemes, namely gradient boosting regression trees (GBRT), Gaussian process (GP), and random forest (RF), combined with different data distribution schemes such as the original (untransformed) distribution, nth root transformation, and robust scaler. With the robust scaler and nth root transformation, the accuracy of the model increases from R2 = 0.85 (for the untransformed distribution) to R2 = 0.94 and R2 = 0.88, respectively. After feature engineering and data correlation, the stress, unit cell size, total height, width, and relative density are chosen as the input parameters to model the strain. The optimal DL model is able to predict the strain of lattices of different topologies (such as circular, octagonal, gyroid, truncated cube, truncated cuboctahedron, rhombic dodecahedron, and many others) with decent accuracy (R2 = 0.936, MAE = 0.05, and MSE = 0.025). The parametric sensitivity analysis and explainable artificial intelligence (using the DeepSHAP library) based insights confirm that, from the modeling perspective of the AM lattices, stress is the most sensitive input to the strain, followed by the relative density.
The findings of this study should help industry and researchers design AM lattice structures of different topologies for various engineering applications.


Introduction
Due to its remarkable mechanical qualities and manufacturing capabilities (adaptability, flexibility, and adjustability), additive manufacturing (AM) has recently attracted the attention of academia and industry [1,2]. In AM, a structure can be constructed "layer by layer" into the desired shape, so that complex items can now be manufactured [3]. In this work, several AM lattices fabricated by different AM methods, materials, and topologies have been investigated in relation to various inputs such as unit cell size, stress, breadth, total height, relative density, and width. Initially, Bayesian surrogate models were created in order to produce a highly accurate deep learning model (based on fine-tuned hyper-parameters). The impact of each investigated input parameter on the mechanical parameter under consideration was then examined. Finally, explainable artificial intelligence (XAI) was employed to generate interpretations of the DL predictions for each individual output and input parameter. The current study is thus an attempt to use artificial intelligence (AI) and XAI on AM lattice structures' data for modeling the stress-strain relationship, i.e., to interpret the AI model's predictions in order to better understand the trends and patterns as well as the quantitative and qualitative impacts of the input parameters in modeling the mechanical properties for the considered data range.

Materials and Methods
Initially, an experimental and simulation dataset of more than 4000 points was collected from the literature (see Table 1). The collected data covers numerous lattice-structure topologies fabricated from different materials such as metals and alloys, composites, ceramics, and polymers. It also shows that different additive manufacturing methods, including fused deposition modeling (FDM), multi-jet fusion (MJF), selective laser melting (SLM), stereolithography (SLA), direct metal laser sintering (DMLS), PolyJet technology, and selective laser sintering (SLS), have been used to fabricate the considered lattice structures. Detailed references can be found in Table 1. The collected data was then used to develop a DL model.
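As a rough illustration of how such a literature dataset can be assembled for modeling, the sketch below parses a two-row stand-in file. The file layout, column names, and values are assumptions based on the inputs described in this study, not the authors' actual data pipeline.

```python
# Hypothetical sketch of assembling the literature dataset; the column names
# are assumptions based on the inputs described in this study.
import csv, io

# A two-row stand-in for the >4000-point dataset collected from Table 1 sources.
raw = io.StringIO(
    "stress,unit_cell_size,total_height,width,relative_density,strain\n"
    "1.2,4.0,40.0,20.0,0.25,0.010\n"
    "2.5,4.0,40.0,20.0,0.25,0.021\n"
)

# Parse every row into a dict of floats, ready for feature engineering.
records = [
    {k: float(v) for k, v in row.items()}
    for row in csv.DictReader(raw)
]
print(len(records), records[0]["stress"])
```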

Table 1. Overview of the lattice topologies, AM methods, and materials in the collected data.

Ref. | Lattice Structure | AM Method | Material
[20] | Circular, octagonal, kelvin, rhombicuboctahedron, cubic | Multi Jet Fusion | Polymer
[21] | Gyroid | Selective Laser Melting | Metallic
[22] | Cubic, diamond, truncated cube, truncated cuboctahedron, rhombic dodecahedron, rhombicuboctahedron | SLM | Ti6Al4V-ELI powder (according to ASTM B348, grade 23) on top of a solid titanium substrate
[32] | Triply periodic minimal surface (D and P surfaces) | — | —
[34] | CLS struts | Fused deposition modeling | PLA

Three different Bayesian surrogate models, namely the Gaussian process (GP), random forest (RF), and gradient boosting regression trees (GBRT), were employed to fine-tune the hyper-parameters. The details of these hyper-parameter tuning procedures can be found in prior works [35,36]. The surrogate models were evaluated for three different cases. In the first case, named the simple model, the actual data was used without any transformation. In the second and third cases, the original data was transformed by the robust scaler and the nth root transformation, respectively. The robust scaler centers the data on the median and scales it by the interquartile range, which reduces the influence of outliers. The data distribution in the simple model and the data transformation with the robust scaler are presented in Figure 1. An overview of the employed methods to find the optimal deep learning model is given in Table 2, and their corresponding accuracies are reported in terms of the coefficient of determination (R2).
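As a conceptual illustration of how a surrogate model steers hyper-parameter search, the toy loop below uses a 1-nearest-neighbour surrogate as a stand-in for the GP/RF/GBRT surrogates, and a made-up objective as a stand-in for validation error. None of the numbers correspond to the study's actual tuning runs.

```python
# Surrogate-assisted hyper-parameter search (toy sketch, not the real tuner).
import random

def objective(lr, nodes):
    # Stand-in for training a DNN and returning its validation error.
    return (lr - 0.0037) ** 2 * 1e4 + ((nodes - 365) / 365) ** 2

def surrogate_predict(history, lr, nodes):
    # 1-nearest-neighbour surrogate: predict the error of the closest
    # previously evaluated configuration (cheap proxy for the objective).
    key = lambda h: (h[0] - lr) ** 2 + ((h[1] - nodes) / 500) ** 2
    return min(history, key=key)[2]

random.seed(0)
history = []
for _ in range(5):                      # initial random evaluations
    lr, nodes = random.uniform(1e-4, 1e-2), random.randint(50, 500)
    history.append((lr, nodes, objective(lr, nodes)))

for _ in range(30):                     # surrogate-guided iterations
    cands = [(random.uniform(1e-4, 1e-2), random.randint(50, 500))
             for _ in range(50)]
    # Evaluate the expensive objective only on the most promising candidate.
    lr, nodes = min(cands, key=lambda c: surrogate_predict(history, *c))
    history.append((lr, nodes, objective(lr, nodes)))

best = min(history, key=lambda h: h[2])
print("best lr=%.4f nodes=%d err=%.4f" % (best[0], best[1], best[2]))
```

Real Bayesian optimization replaces the nearest-neighbour proxy with a probabilistic surrogate and an acquisition function, but the evaluate-update-propose loop is the same.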
The given data was analyzed using quantile-quantile (Q-Q) plots, graphical tools for comparing two probability distributions. Without transformation, the data is negatively (left) skewed, as shown in Figure 1a, and a maximum coefficient of determination of R2 = 0.85 is obtained. When the data is transformed with the robust scaler, the negative skew is removed (Figure 1b), increasing R2 to 0.94, which is the best among the employed methods. With the nth root transformation, an R2 value of 0.88 is obtained.
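The two transformations compared above can be sketched in a few lines of pure Python; this is a minimal illustration of what each transform does, not the exact preprocessing code used in the study.

```python
# Minimal versions of the two data transformations compared in the study.
import statistics

def robust_scale(xs):
    """Centre on the median and scale by the interquartile range (IQR)."""
    xs_sorted = sorted(xs)
    med = statistics.median(xs_sorted)
    q1 = statistics.median(xs_sorted[: len(xs_sorted) // 2])
    q3 = statistics.median(xs_sorted[(len(xs_sorted) + 1) // 2 :])
    return [(x - med) / (q3 - q1) for x in xs]

def nth_root(xs, n=3):
    """Compress a skewed, non-negative feature with an nth-root transform."""
    return [x ** (1.0 / n) for x in xs]

data = [1.0, 2.0, 3.0, 4.0, 100.0]        # strongly skewed toy sample
print(robust_scale(data))
print(nth_root([8.0, 27.0]))
```

Because the robust scaler uses the median and IQR rather than the mean and standard deviation, the single outlier (100.0) barely affects how the other points are scaled.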
A Gaussian process (GP) is a collection of random variables, any finite subset of which has a joint Gaussian distribution (see Figure 2 for the framework). A GP is represented by a mean function and a covariance function. The mean function is typically taken to be zero, since the GP is a linear combination of normally distributed random variables [37,38]. Gaussian process regression (GPR) is among the most significant Bayesian ML techniques; it is built on an efficient procedure for establishing a posterior distribution over functions of the features. It is a non-linear regression approach that conditions the posterior distribution on the given training data, and it is adaptable enough to tackle problems with high dimensionality, limited sample sizes, and nonlinear, complicated regression relationships.
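A common choice for the GP covariance function is the squared-exponential (RBF) kernel; the minimal sketch below, with assumed hyper-parameters sigma (signal scale) and ell (length scale), builds a small covariance matrix for a few 1-D inputs.

```python
# Squared-exponential (RBF) covariance function, a typical GP kernel choice.
import math

def rbf_kernel(x1, x2, sigma=1.0, ell=1.0):
    """C(x, x') = sigma^2 * exp(-(x - x')^2 / (2 * ell^2))."""
    return sigma ** 2 * math.exp(-((x1 - x2) ** 2) / (2.0 * ell ** 2))

# Covariance matrix for three 1-D inputs: nearby points covary strongly.
xs = [0.0, 0.5, 1.0]
K = [[rbf_kernel(a, b) for b in xs] for a in xs]
print(K[0])
```

The matrix is symmetric with sigma^2 on the diagonal, and off-diagonal entries decay as the inputs move apart, which is what makes GP predictions smooth.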
Formally, a GP is a stochastic process: a set of random variables, any finite number of which have a joint Gaussian distribution. It is entirely defined by its mean function µ(x) and covariance function C(x, x′):

µ(x) = E[f(x)],  (1)
C(x, x′) = E[(f(x) − µ(x))(f(x′) − µ(x′))],  (2)

so that a GP is written as

f(x) ~ GP(µ(x), C(x, x′)),  (3)

where x, x′ ∈ X are random variables.

Gradient boosting, like the decision tree, is an aggregation approach that integrates several learning algorithms. A GBRT for prediction applications is, as the name suggests, a combination of gradient boosting and regression trees which employs a group of regression trees to minimize the error of a single large ensemble. The error values created by subtracting the estimated values of the first tree from the target truth values are given to the second tree. The error values obtained by subtracting the sum of the values predicted by the first and second trees from the target truth values are used to build the third tree. This procedure is continued until the maximum number of trees is reached. The final predicted value is calculated by adding the predicted values from all decision trees.
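The residual-fitting loop described above can be illustrated with depth-1 regression stumps in place of full regression trees; this is a toy sketch of the boosting idea, not the GBRT implementation used for tuning.

```python
# Gradient boosting on residuals with two-leaf regression stumps (toy sketch).
def fit_stump(xs, rs):
    """Find the split threshold minimising squared error of a two-leaf fit."""
    best = None
    for t in sorted(set(xs))[1:]:
        left = [r for x, r in zip(xs, rs) if x < t]
        right = [r for x, r in zip(xs, rs) if x >= t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x < t else rm

xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.0, 1.5, 3.5, 4.0]
ensemble, preds = [], [0.0] * len(xs)
for _ in range(20):                       # boosting iterations
    residuals = [y - p for y, p in zip(ys, preds)]   # what is still unexplained
    stump = fit_stump(xs, residuals)                 # next tree fits the residuals
    ensemble.append(stump)
    preds = [p + stump(x) for p, x in zip(preds, xs)]

predict = lambda x: sum(s(x) for s in ensemble)      # sum over all trees
print([round(predict(x), 3) for x in xs])
```

Each stump only corrects what the previous ones left over, so the training error shrinks monotonically as trees are added.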
The RF algorithm is a collection of binary trees based on two stochastic elements: a randomly generated bootstrap set and a randomly selected set of features for every node. The bootstrap set, which comprises the samples for creating a tree, and the out-of-bag (OOB) set, which contains the test examples not included in the bootstrap set, are used to create the trees of the RF. The training instances are randomly sampled from the training set with replacement to create the bootstrap set. Each tree is trained and assessed using its own bootstrap set and OOB set. The splitting criterion used in each node aims to maximize the information gain. The RF employs just a limited number of variables (mtries) out of all available variables (M); these mtries variables are picked at random, and only they are used to optimize the splitting criterion. Once the classifier has been constructed, the RF optimization technique assesses the so-called OOB error, which is the average of each tree's classification error on its individual OOB set. The OOB error is an unbiased estimate of the classifier's generalization error (GE). Breiman [39] proved the bound

GE ≤ ρ̄ (1/s² − 1),  (4)

where s is the ensemble's strength and ρ̄ is the mean correlation between trees. The GE minimum may be achieved by lowering the correlation amongst trees and raising the ensemble's classification strength; the aim of RF parameter optimization is determined by these two opposing tendencies.
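The bootstrap/OOB split described above can be sketched directly: for each tree, sample training indices with replacement, and the indices never drawn form that tree's out-of-bag set.

```python
# Forming a bootstrap set and its out-of-bag (OOB) complement for one tree.
import random

random.seed(42)
n = 10                                               # training-set size
bootstrap = [random.randrange(n) for _ in range(n)]  # sampled with replacement
oob = sorted(set(range(n)) - set(bootstrap))         # never drawn -> OOB set

print("bootstrap:", bootstrap)
print("OOB:", oob)
# On average about 36.8% of samples ((1 - 1/n)^n -> e^-1) end up out-of-bag,
# so every tree gets a free held-out set for the OOB error estimate.
```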
Convergence plots for the Bayesian surrogate models are shown in Figure 3a, where the convergence of the GBRT, GP, and RF models is plotted against the number of calls. The different investigated hyper-parameters (a hyper-parameter in machine learning is a parameter that controls how the learning process is carried out) are provided in Figure 3b, and the framework of the optimal deep neural network (DNN) model is represented in Figure 4. The investigated hyper-parameters include the learning rate, activation function, number of hidden layers, number of dense nodes in each hidden layer, optimizer, kernel initialization mode, batch size, epochs, etc. Based on the data distribution, transformation, and hyper-parameter tuning, an optimal deep learning model is attained, which has a learning rate of 0.003664218944047891 and a single hidden layer containing 365 nodes. The optimal model uses "tanh" as the activation function, "glorot normal" as the initialization mode, and "Adam" as the optimizer. The decay rate and batch size for the optimal model are 1e-06 and 200, respectively.
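Shape-wise, the reported optimal architecture (six inputs, one hidden layer of 365 tanh nodes, one linear output for strain) can be sketched as a plain forward pass. The weights below are random Glorot-style stand-ins, not the trained model, and the input record is hypothetical.

```python
# Forward pass through the reported optimal topology: 6 -> 365 (tanh) -> 1.
import math, random

random.seed(0)
N_IN, N_HID = 6, 365
W1 = [[random.gauss(0, (2.0 / (N_IN + N_HID)) ** 0.5)   # Glorot-style scale
       for _ in range(N_IN)] for _ in range(N_HID)]
b1 = [0.0] * N_HID
W2 = [random.gauss(0, 0.05) for _ in range(N_HID)]
b2 = 0.0

def forward(x):
    # Hidden layer: 365 tanh units over the 6 input features.
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    # Linear output unit predicting strain.
    return sum(w * h for w, h in zip(W2, hidden)) + b2

# One hypothetical record: stress, unit cell size, height, width, breadth, density.
x = [1.2, 4.0, 40.0, 20.0, 20.0, 0.25]
print(forward(x))
```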

Results
The results and discussion section consists of the data correlation (inputs and output), the development of the deep neural network (DNN), and the prediction and performance evaluation of the DNN model, followed by the parameter sensitivity analysis and the interpretation of the DNN results by explainable artificial intelligence (XAI).

Correlating the Input and Output Features
The experimental data of the inputs and the output is correlated by using the Pearson correlation. The input parameters, including stress, unit cell size, total height, breadth, relative density, and width, are correlated with each other and with strain (the output parameter), as shown in Figure 5. The details of this procedure are available in a prior study [40]. Figure 5a illustrates the correlation between all of the considered input and output parameters, while Figure 5b highlights the correlation of each individual input with the output parameter. As expected, stress and strain are strongly correlated with each other, and this correlation is positive (direct). However, like the relative density, the dimensions such as height, width, breadth, and unit cell size are relatively weakly correlated with strain (see Figure 5). It can be noted that, apart from stress, the remaining input parameters are negatively correlated with the strain for the considered data of additively manufactured lattice structures, across a wide range of topologies, materials, and additive manufacturing methods.
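The Pearson coefficient used for Figure 5 can be computed from its definition in a few lines; the sample values below are illustrative, not taken from the collected dataset.

```python
# Pearson correlation coefficient from scratch.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

stress = [1.0, 2.0, 3.0, 4.0]
strain = [0.01, 0.019, 0.032, 0.041]     # made-up, monotone with stress
print(round(pearson(stress, strain), 3))
```

A value near +1 corresponds to the strong direct stress-strain correlation seen in Figure 5; negative values correspond to the inverse relationships of the dimensional inputs.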


Predictive Performance of the Developed Neural Network Model
Based on the data correlation and feature engineering, stress, height, width, breadth, unit cell size, and relative density are taken as the important and impactful parameters to model the strain of the additively manufactured lattice structures. The optimal framework of the NN is built on the optimal set of hyper-parameters obtained by the GP, GBRT, and RF optimization methods. The developed model predicts the strain with R2 = 0.936, MAE = 0.05, MSE = 0.025, and RD = 58%. Figure 6 clearly depicts that the model's predictions are more accurate for lower values of strain, owing to the greater number of data points in this range. On the contrary, the model's predictive performance for higher values of strain is poorer, which can be attributed to the very few data points available for training in that range. Overall, the model's performance is satisfactory, given that the predictions span a wide range of lattice-structure topologies, additive manufacturing methods, materials, and testing ranges.
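The three reported metrics (R2, MAE, and MSE) can be reproduced directly from their definitions; the toy predictions below are illustrative only.

```python
# R^2, MAE, and MSE computed from scratch for a toy set of predictions.
def r2_mae_mse(y_true, y_pred):
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    return 1.0 - ss_res / ss_tot, mae, ss_res / n

y_true = [0.01, 0.05, 0.10, 0.40]        # illustrative strain values
y_pred = [0.012, 0.048, 0.11, 0.38]
r2, mae, mse = r2_mae_mse(y_true, y_pred)
print(round(r2, 3), round(mae, 4), round(mse, 6))
```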


Parameter Sensitivity Analysis
In order to find the parametric effect, a sensitivity analysis has been performed based on the model's predictions. The methodology adopted to perform the deep learning (DL) based parameter sensitivity analysis has been reported in the literature [41]. Herein, the considered input parameters, i.e., stress, lattice dimensions (breadth, width, and total height), relative density, and unit cell size, have been dropped from the inputs one by one to visualize their impact on the model's predictions of strain.
Figure 7 illustrates the predictive performance of the developed model when stress is removed from the inputs. Because the number of points between 0 and 16 is very high, the predictions in this region are zoomed in and shown in a separate panel. The training and validation losses of the retrained model (without stress in the inputs) are also shown in Figure 7. Evidently, removing stress from the inputs makes the model's predictions very poor and inaccurate: the R2 value drops from 0.94 to 0.6, which can also be observed from the scattering of the predicted points with respect to the experimental data.
The unit cell size is also an impactful parameter for modeling the strain of the considered additively manufactured lattice structures. This can be witnessed in Figure 8, where unit cell size has been removed from the inputs. The R2 value drops from 0.94 to 0.88, and the predicted points remain relatively close to the experimental data compared to Figure 7, where stress was removed. The training and validation loss plots in Figures 7 and 8 reflect this difference.
To highlight the impact of the lattice dimensions (breadth, total height, and width), these parameters are removed from the inputs and the strain is modeled as depicted in Figure 9. In this case, R2 drops from 0.94 to 0.91, showing that the lattice dimensions are less sensitive inputs for the strain than the stress and unit cell size. The training and validation loss plot provides further evidence for this conclusion.
Finally, the impact of relative density on the predicted strain has been quantified (see Figure 10). Relative density has been removed from the input list and the strain has been modeled using the remaining features. The model's predictive accuracy again becomes poor: the R2 value drops from 0.94 (for the original model) to 0.783, making this the second worst model. Hence, relative density is the second most sensitive parameter for the strain after stress. It can be concluded that, from the modeling perspective, stress is the most sensitive input to the strain, followed by relative density.
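The drop-one-feature procedure described above can be sketched as follows. This is a minimal illustration on synthetic data, with a random forest standing in for the DL model; the feature names and coefficients are hypothetical, chosen only so that a "stress"-like input dominates, as in the paper's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical stand-ins for three inputs; weights are illustrative only.
X = rng.uniform(0.0, 1.0, size=(n, 3))
y = 5.0 * X[:, 0] + 2.0 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0.0, 0.1, n)
features = ["stress", "relative_density", "unit_cell_size"]

def fit_score(cols):
    """Train on a subset of feature columns and return the test R2."""
    Xtr, Xte, ytr, yte = train_test_split(X[:, cols], y, random_state=0)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(Xtr, ytr)
    return r2_score(yte, model.predict(Xte))

# Baseline with all inputs, then drop one feature at a time and record
# how much the test R2 falls: a larger drop means a more sensitive input.
baseline = fit_score([0, 1, 2])
drops = {f: baseline - fit_score([j for j in range(3) if j != i])
         for i, f in enumerate(features)}
ranking = sorted(drops, key=drops.get, reverse=True)
```

On this synthetic data, dropping the dominant "stress" column produces by far the largest R2 drop, mirroring the ordering reported for Figures 7-10.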

Explainable Artificial Intelligence (XAI) Based Insights
In order to visualize the impact of a single input feature on the model's predictions, a scatter plot known as a dependence plot can be used. Each dot represents a prediction made by the model: the x-axis shows the feature's value and the y-axis shows the SHAP value of that feature, which indicates how the prediction changes with the value of the feature. As seen in Figure 11, as the value of stress increases its SHAP value increases, hence the greater impact on the model's prediction. The interaction with another feature (chosen automatically by the SHAP dependence plot) is represented by the color of the dots. The interaction feature with respect to stress is the 'SLA' fabrication technology; the SHAP value of stress with respect to SLA lies at zero (shown in red dots), hence a minimal impact on the prediction in comparison to the other fabrication technologies. The unit cell size, total height, breadth, and width show similar behavior, with most of the impactful values lying between 0.0 and 0.2; the remaining values have a lower impact on the prediction. The unit cell size and total height interact with relative density, whereas breadth and width interact with stress and unit cell size, respectively. The effective values of relative density lie between 0.0 and 0.4, whereas the remaining values show the least impact on the prediction. Relative density interacts with stress: the highest impact of stress, 0.35 (normalized between 0 and 1), occurs at a relative density value of 0.1.
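The mechanics of a dependence plot can be reproduced numerically. For a linear model with independent inputs, the exact SHAP value of feature i on a sample x is w_i(x_i − E[x_i]); scattering the feature's value against this SHAP value is precisely a dependence plot. The sketch below uses hypothetical weights, not the paper's trained DL model.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-ins for two inputs (e.g., stress and relative density).
X = rng.uniform(0.0, 1.0, size=(500, 2))
w = np.array([5.0, 2.0])                      # illustrative linear weights

# Exact SHAP values for a linear model: phi_i = w_i * (x_i - mean(x_i)).
phi = (X - X.mean(axis=0)) * w

# SHAP additivity: prediction = base value + sum of per-feature SHAP values.
base_value = X.mean(axis=0) @ w
preds = X @ w
assert np.allclose(preds, base_value + phi.sum(axis=1))

# A dependence plot scatters X[:, 0] against phi[:, 0]: as the stress-like
# feature grows, its SHAP value grows, pushing the prediction up more strongly.
corr = np.corrcoef(X[:, 0], phi[:, 0])[0, 1]
```

In this linear case the dependence relation is exact (correlation 1); for the paper's nonlinear DL model the scatter in Figure 11 shows the same trend with spread caused by feature interactions.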


Figure 11. Dependence plots to interpret the predictions of the DL model.
In order to visualize the distribution of SHAP values, an embedding plot is used, which is a 2D projection created by PCA (Principal Component Analysis), as shown in Figure 12. It visualizes the scattering of SHAP values for a given feature. Embedding plots have been created for all the input features (stress, unit cell size, relative density, total height, breadth, and width). Stress has the highest impact on the prediction, as its SHAP values range from −10 to 50. The second most influential parameter is relative density, whose SHAP values range from −10 to 30, and the third is unit cell size, with SHAP values between 0 and 7. The remaining parameters are the least influential, with SHAP values between 0 and 4. The importance of a feature is based on its impact on the model's prediction.
The descending order of feature importance is depicted by a summary plot, as shown in Figure 13. The x-axis represents the mean absolute SHAP value and the y-axis lists the features. Similar to the dependence and embedding plots, the summary plot shows that stress is the most influential factor, relative density comes second, unit cell size third, and the lattice dimensions are the least influential. Categorical features also appear in the plot, owing to the number of data points belonging to each category.
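The embedding-plot construction can be sketched in a few lines: project the SHAP matrix to 2D with PCA for plotting, and rank features by mean absolute SHAP value. The SHAP matrix below is synthetic; its value ranges loosely echo Figure 12 but the numbers themselves are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
# Hypothetical SHAP matrix (rows = samples, columns = features).
shap_values = np.column_stack([
    rng.uniform(-10, 50, 300),   # stress: widest range, as in Figure 12
    rng.uniform(-10, 30, 300),   # relative density
    rng.uniform(0, 7, 300),      # unit cell size
    rng.uniform(0, 4, 300),      # total height
])

# The embedding plot is a 2D PCA projection of the SHAP matrix,
# which would then be scatter-plotted, colored by each feature's SHAP value.
embedding = PCA(n_components=2).fit_transform(shap_values)

# Ranking features by mean |SHAP| reproduces the importance ordering.
importance = np.abs(shap_values).mean(axis=0)
```

With these ranges, the mean |SHAP| ordering recovers stress first and total height last, matching the ordering discussed for Figures 12 and 13.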

Figure 13. Summary plots to interpret the predictions of the DL model.
Figure 14 depicts the partial dependence plots of several input features. A partial dependence plot demonstrates the influence of one feature while marginalizing out all other features, illustrating how a parameter should be modified to maximize its influence. The horizontal line shows the predicted value, whereas the vertical line marks the feature's median. As seen in Figure 14, the most important parameter is stress, whose impact increases as its value increases. The impact of relative density first decreases up to 0.4 and then increases up to 1, while the impact of the other parameters (unit cell size, total height, breadth, and width) decreases with their values.

Figure 14. Partial dependence plots to interpret the predictions of the DL model.
An additive force layout plot can be generated to visualize a prediction by means of SHAP values (see Figure 15). It demonstrates how much each factor contributed, favorably or adversely, relative to the base value in order to make a prediction. Figure 15a is a clustered force plot (clustered SHAP values) that visualizes the force layout of the test data; it is shown for only 18 test points for clarity. Figure 15b-e demonstrates individual force plots for six test points. The pink arrows reflect SHAP values that increase the prediction (to the right), while the green arrows show those that decrease it (to the left). Each arrow's size shows the magnitude of the influence of the related feature. The "base value" is the model's average prediction over the test set. Figure 15a is summarized as follows:
1. The predicted value for this observation is 4.65.
2. The base value is 0.09297 (without any effects).
3. The chosen structure is a diamond structure and the material is Al2O3 ceramic slurry.
4. Stress is negatively related to the prediction by the amount 0.05639.
Similarly, the other individual force plots can be summarized. From the above discussion and analysis, it can be observed that the employed method is applicable to a wide range of materials, AM methods, and lattice structures (see Figure 16).
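The arithmetic behind a force plot is purely additive: the prediction equals the base value plus the signed per-feature SHAP contributions, which is what the arrow lengths and directions encode. The numbers below are illustrative only, not the values from Figure 15.

```python
# A force plot decomposes one prediction into the base value plus signed
# per-feature contributions: positive SHAP values (right-pointing arrows)
# push the prediction up, negative ones (left-pointing arrows) push it down.
# All numbers here are hypothetical, chosen only to show the bookkeeping.
base_value = 0.10
contributions = {
    "stress": -0.05,           # pushes the prediction down
    "relative_density": 0.30,  # pushes the prediction up
    "unit_cell_size": 0.08,
}
prediction = base_value + sum(contributions.values())
```

Reading a real force plot amounts to exactly this sum: starting from the base value, each arrow shifts the running total until the final predicted strain is reached.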

Conclusions
The aim of this work was to evaluate the stress-strain relationship of various AM lattice topologies, to carry out a parameter sensitivity analysis, and to highlight explainable AI-based insights for modeling the strain of various AM lattice architectures. To that end, the relationship between a number of AM lattices, constructed using various AM techniques, materials, and topologies, and various inputs, including unit cell size, stress, breadth, total height, relative density, and width, has been studied. The major findings are as follows.
- Parameter sensitivity analysis reveals that, from the modeling perspective, stress is the most sensitive input to the strain, followed by relative density.
- The explainable artificial intelligence (XAI) analysis also confirms that stress has the highest impact on the prediction, as its SHAP values range from −10 to 50. The second most influential parameter is relative density, whose SHAP values range from −10 to 30, and the third is unit cell size, with SHAP values between 0 and 7. The remaining parameters are the least influential, with SHAP values between 0 and 4. The importance of a feature is based on its impact on the model's prediction.
It is hoped that this methodology will assist researchers and industry in using data-driven techniques to provide preliminary information on the influential parameters prior to conducting trials. Furthermore, this methodology presents a modeling framework for assessing the mechanical properties of the additively manufactured structures and materials under consideration. Although the proposed scenario covers a wide data range of additively manufactured lattice structures, the methodology can be extended to additional additive manufacturing methods, materials, and architectures for an even wider range of data.

Conflicts of Interest:
The authors declare no conflict of interest.