
Compliance Prediction for Structural Topology Optimization on the Basis of Moment Invariants and a Generalized Regression Neural Network

1 School of Aerospace Engineering and Applied Mechanics, Tongji University, Shanghai 200092, China
2 Department of Aeronautics and Astronautics, Fudan University, Shanghai 200433, China
* Authors to whom correspondence should be addressed.
Entropy 2023, 25(10), 1396; https://doi.org/10.3390/e25101396
Submission received: 28 August 2023 / Revised: 18 September 2023 / Accepted: 25 September 2023 / Published: 29 September 2023
(This article belongs to the Section Signal and Data Analysis)

Abstract

Topology optimization techniques are essential for manufacturing industries, for example in designing fiber-reinforced polymer composites (FRPCs) and lightweight structures with outstanding strength-to-weight ratios. In the SIMP approach, artificial intelligence algorithms are commonly utilized to enhance traditional FEM-based compliance minimization procedures. Based on an effective generalized regression neural network (GRNN), a new deep learning algorithm for compliance prediction in structural topology optimization is proposed. The algorithm learns the structural information using a fourth-order moment invariant analysis of the structural topology obtained from FEA at different iterations of classical topology optimization. A cantilever and a simply supported beam problem are used as ground-truth datasets, and the moment invariants serve as independent input features. Compared with the well-known convolutional neural network (CNN) and deep neural network (DNN) models, the proposed GRNN model achieves a high prediction accuracy (R² > 0.97) and drastically reduces the training and prediction cost. Furthermore, the GRNN algorithm exhibits excellent generalization ability on the prediction performance of the optimized topology with rotations and varied material volume fractions. This algorithm is a promising replacement for the FEA calculation in the SIMP method, and can be applied to real-time optimization for advanced FRPC structure design.

1. Introduction

Fiber-reinforced polymer composites (FRPCs), which offer outstanding material properties such as high stiffness, strength, and strength-to-weight ratio, are attractive for engineering applications and have gained considerable attention from both academia and industry [1]. The manufacturing of FRPCs is a multidisciplinary subject that requires proper design rules and topology optimization [2].
Topology optimization is a powerful tool for helping engineers design innovative structures and products, and has already found successful applications in various energy fields [3,4]. In knowledge-driven theories, minimizing the mean compliance or the elastic strain energy of a structure is the main objective function for the majority of structural topology optimization problems [5]. Entropy generation is often additionally used to assess topology optimization for high-performance structures in aeronautical engineering and automotive systems [6]. Considerable effort has gone into improving the robustness and effectiveness of optimization algorithms, and their applicability has been confirmed [7,8]. For topology optimization problems, numerous knowledge-driven strategies have been developed to achieve the optimal structure under given load and boundary conditions, manufacturing restraints, and performance objectives. Such methods include solid isotropic material with penalization (SIMP) [9], the level set method [10], bi-directional evolutionary structural optimization (BESO) [11], and moving morphable components (MMC) [12,13].
The SIMP method predicts an optimal material distribution within a given design space, and is considered the most widely used topology optimization scheme for exploring optimal structures [14]. It is a typical mesh-based approach: the reference domain is first discretized into a finite element mesh, and a finite element analysis (FEA) is then performed to calculate the compliance of a topology configuration. Traditional methods for topology optimization require considerably fine meshes to produce high-resolution designs; as a result, the FEA calculation can be difficult and time-consuming. The cost of performing the FEA and evaluating a compliance-based objective function under the volume constraint increases as the finite element discretization becomes finer.
A high-performance GPU solver is one efficient way to reduce the computing burden without sacrificing the quality of the structures, especially for large 3D design problems [15]. In addition, many researchers are increasingly interested in applying machine learning techniques to assist compliance minimization procedures. As numerical modeling naturally complements ML techniques by providing enormous amounts of relevant data, data creation is no longer a challenge in developing effective and robust data-driven models [16,17]. Ref. [18] suggested a K-means algorithm to reduce the dimension of the design variables, thus shortening the computational time. Ref. [19] established an implicit shape-function mapping between a high resolution and a coarse resolution to reduce the FEA calculation significantly.
There have also been widespread attempts to speed up the convergence of the optimization by using powerful neural network (NN) models [20]. Convolutional neural networks (CNNs), a family of artificial neural networks that have gained dominance in a variety of computer vision tasks [21,22], are the most well-established deep learning algorithms in topology optimization tasks; they recast the topology optimization problem from a design challenge into an image recognition problem. For optimized topology configurations, researchers have proposed replacing the FEA calculation module with CNN models [23,24]. Furthermore, CNN models have been combined with generative adversarial networks (GANs) [25] and long short-term memory networks (LSTMs) [17,26] to accelerate the iteration procedure of the SIMP algorithm. The image-based CNN model achieves high accuracy in mapping a topology configuration to its compliance, but training from scratch is usually time-consuming, and the black-box nature hinders its applicability [22].
In the fields of image processing and computer vision, the geometric features of an image can also be characterized using a set of measurable quantities known as moment invariants [27]. Moment invariants possess the inherent properties of rotation, translation, and scale invariance, and find widespread use in tasks such as object recognition, tracking, and image feature extraction. When invariants rather than the raw image are used as features, the choice of artificial neural network becomes flexible, and the CNN model can be replaced [24,28].
The radial basis function (RBF) network has emerged as a variant of the artificial neural network, showing the principal advantages of easy design, good generalization, and faster training. The GRNN [29] is one type of RBF network, and it can learn quickly and rapidly converge to the optimal regression surface even with large datasets [30]. It has been demonstrated that the GRNN algorithm can extract valuable information from images more efficiently, while reducing the training cost, compared with CNN models [28].
In this study, toward elucidating the correlation between compliance and optimized topology configurations, we propose a novel deep learning model based on moment invariant analysis and an efficient GRNN. The paper is outlined as follows. Section 2 discusses the design problem and the existing SIMP method, and elaborates on the mathematical calculation of moment invariants, along with the architecture of the GRNN used in this study. To determine a suitable deep learning model, this paper compares the compliance prediction performance of the GRNN, DNN, and CNN algorithms; the SIMP simulation results are utilized for dataset generation to train the machine learning models. Section 3 evaluates the prediction model in terms of accuracy, efficiency, and reliability, with evaluations and comparisons against existing models. Section 4 gives concluding remarks, explains the limitations of the current framework, and outlines future work.

2. Method

Figure 1 presents the flowchart of this work, which starts with the utilization of the SIMP method for data creation of structural topology configurations for the design problem. The images are processed to extract their moment invariants, which are then fed into GRNN models as features. In addition to the GRNN model, the CNN and DNN models are trained and optimized for the same design problem. In the last step, the machine learning models are assessed for the performance of accuracy, efficiency, and reliability.

2.1. The SIMP Method and Data Preparation

2.1.1. The SIMP Method

As a compliance minimization problem with combinations of variables, the SIMP procedure starts with the discretization of a given domain Ω into a grid of finite elements. A binary value ρ_e is assigned to each element:
$$\rho_e = \begin{cases} 0 & \text{if } e \in \Omega \setminus \Omega_s \\ 1 & \text{if } e \in \Omega_s \end{cases} \quad (1)$$
The value of ρ e describes an element that is either filled with material for regions that require material or emptied of material for regions where material can be removed (representing voids).
The minimization is subject to the equilibrium equation and to a constraint on the ratio of the optimized material volume to the original design volume, together with bounds on the design variables. The problem can be defined as:
$$\min_{\rho}:\; c(\rho) = U^T K U = \sum_{e=1}^{N} \rho_e^{\,p}\, u_e^T k_0 u_e, \quad 0 < \rho_{\min} \le \rho \le 1 \quad (2)$$
$$\text{subject to}:\; \sum_{e=1}^{N} v_e(\rho_e) / V_0 = V_f \quad (3)$$
$$K U = F \quad (4)$$
where c is the compliance, p is the penalty factor, and ρ_min is the lower bound on the densities, introduced to avoid singularity of the stiffness matrix. U and F stand for the global displacement and force vectors, respectively. K is the global stiffness matrix; u_e and k_0 denote the displacement vector of each element and the element stiffness matrix for solid material, respectively. v_e is the volume of each element, V_f is the target material volume fraction, and N is the total number of elements in the design domain Ω, which also determines the resolution of the topology image.
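As a concrete illustration, once the element displacements u_e are available from an FEA solve, the objective in Equation (2) reduces to a penalized sum of element strain energies. The following is a minimal NumPy sketch, not the authors' implementation; the element data below are hypothetical toy values:

```python
import numpy as np

def simp_compliance(rho, ue_list, k0, p=3):
    """Evaluate the SIMP objective c = sum_e rho_e^p * u_e^T k0 u_e.

    rho     : (N,) element densities (toy values here)
    ue_list : (N, d) per-element displacement vectors from a prior FEA solve
    k0      : (d, d) element stiffness matrix for solid material
    """
    c = 0.0
    for rho_e, ue in zip(rho, ue_list):
        # penalized element strain energy
        c += (rho_e ** p) * float(ue @ k0 @ ue)
    return c
```

In the real SIMP loop, this sum is re-evaluated after every FEA solve; the penalty p > 1 drives intermediate densities toward 0 or 1.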
The implementation of SIMP involves multiple iterations, each of which produces a topology image. Four main modules are performed per iteration: finite element analysis (FEA), sensitivity analysis (Equations (2)–(4)), sensitivity filtering, and updating of the design variables.
We analyze the time consumption in the SIMP method using the Python performance profiling tool Line profiler [31], as shown in Table 1. The higher computation effort is associated with the modules of FEA calculation and advanced filter techniques, especially when a refined mesh strategy is applied for describing the structural geometry with high resolution.

2.1.2. Data Generation via SIMP

We employ the SIMP algorithm to generate a dataset. As illustrated in Figure 2, two topology optimization tasks are considered, namely the simply supported beam in Figure 2a and the cantilever beam in Figure 2b.
The design domain has a dimension of 120 × 40, with load F as a concentrated force. For each task, the loading position [F_x, F_y] is defined as the distance between the load node and the center of the design domain in the xOy Cartesian coordinates, and together with the material volume fraction V_f it comprises the primary constraints. For the simply supported beam, the load F can be applied to the nodes of the upper and lower surfaces, while the load of a cantilever beam acts on the surface of the free end. The material volume fraction V_f follows a normal distribution N(μ = 0.5, σ = 0.1), with a minimum value of 0.176 and a maximum value of 0.802 in the dataset.
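The volume-fraction sampling described above can be sketched as follows. This is an assumed rejection-sampling scheme; the paper does not specify how draws outside the observed [0.176, 0.802] range are handled:

```python
import numpy as np

def sample_volume_fractions(n, mu=0.5, sigma=0.1, lo=0.176, hi=0.802, seed=0):
    """Draw n material volume fractions V_f ~ N(mu, sigma^2),
    resampling any draws that fall outside [lo, hi]."""
    rng = np.random.default_rng(seed)
    out = np.empty(n)
    filled = 0
    while filled < n:
        draws = rng.normal(mu, sigma, n - filled)
        keep = draws[(draws >= lo) & (draws <= hi)]
        out[filled:filled + keep.size] = keep
        filled += keep.size
    return out
```

With σ = 0.1 the bounds sit roughly 3σ from the mean, so almost all draws are accepted and the sample mean stays close to 0.5.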
As displayed in Figure 2b,d, toward minimizing compliance and obtaining the optimal topology configuration, the implementation of the SIMP algorithm updates the image in response to the FEA calculation in each iteration.
Notably, the first 40 iterations capture the topology images evolving from blurry configurations to converged images; thus, we collect the FEA-determined compliances and the corresponding topology images over the first 40 iterations to construct the dataset. A similar strategy has also been adopted in [32] for CNN model training.
In accordance with the SIMP implementations, a dataset is obtained with 20,000 images for the two tasks, in which the dataset’s completeness and diversity are guaranteed.
Figure A1 illustrates the image distributions with considered constraints. Of those, 80 % and 20 % are designated for training and testing, respectively.

2.2. Moment Invariants

The fundamental idea of moment invariants is to extract image information using a set of measurable quantities known as invariants. The geometric moment m is the most basic type of moment.
For a two-dimensional image, the p + q order geometric moment is defined as follows:
$$m_{pq} = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} x^p y^q f(x, y)\, dx\, dy \quad (5)$$
where (x, y) represents the coordinates of each pixel in the image within a Cartesian coordinate system, and f(x, y) is the density distribution function. Geometric moments reflect numerous geometric features of the image, capturing the structural characteristics of the topology image. For example, the zero-order geometric moment m_00 describes the "mass" of the image, which is the area of a binary image; m_10/m_00 and m_01/m_00 give the centroid of the image; and, similar to the "moment of inertia" in material mechanics, the second-order moments m_20 and m_02 describe the mass distribution of the image.
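For a discrete image, the integral in Equation (5) becomes a sum over pixels. A minimal sketch, using a toy 4 × 4 image rather than one from the dataset:

```python
import numpy as np

def geometric_moment(img, p, q):
    """Discrete m_pq = sum_x sum_y x^p y^q f(x, y) for a 2D intensity array.
    Rows are indexed by y and columns by x."""
    ys, xs = np.indices(img.shape)
    return float(np.sum((xs ** p) * (ys ** q) * img))

img = np.ones((4, 4))                    # a 4x4 solid square
m00 = geometric_moment(img, 0, 0)        # "mass" (area) of the binary image
cx = geometric_moment(img, 1, 0) / m00   # centroid x
cy = geometric_moment(img, 0, 1) / m00   # centroid y
```

For this square, m_00 = 16 and the centroid is (1.5, 1.5), i.e., the geometric center of the pixel grid.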
Furthermore, image invariants remain unchanged under shape-preserving transformations such as translation, scaling, rotation, and mirroring. The widely used Hu moments [27] consist of seven independent variables and are derived using only geometric moments up to the third order, which may not be sufficient for extracting enough structural information from topology images.
This study adopts the method derived from [33], which provides a set of invariant moments M of arbitrary order that is complete and independent. The calculation of invariant moments M can be derived using the complex moment c p q , which has the following expression:
$$c_{pq} = \sum_{k=0}^{p} \sum_{j=0}^{q} \binom{p}{k} \binom{q}{j} (-1)^{q-j} \cdot i^{\,p+q-k-j} \cdot m_{k+j,\; p+q-k-j} \quad (6)$$
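Equivalently, c_pq can be computed directly as a sum of (x + iy)^p (x − iy)^q over pixels, which sidesteps the binomial expansion of Equation (6). A small sketch with a toy image; a useful sanity check is that c_11 equals m_20 + m_02 and is purely real:

```python
import numpy as np

def complex_moment(img, p, q):
    """c_pq = sum over pixels of (x + iy)^p (x - iy)^q f(x, y),
    equivalent to the binomial expansion over geometric moments."""
    ys, xs = np.indices(img.shape)
    z = xs + 1j * ys
    return complex(np.sum((z ** p) * (np.conj(z) ** q) * img))
```

Because z * conj(z) = x² + y², the moment c_11 collapses to the real quantity m_20 + m_02, a quick correctness check for any implementation.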
The moment invariants M of order r are derived from the complex moments c_pq of order p + q ≤ r:
$$B = \left\{ \Phi(p, q) \equiv c_{pq}\, c_{q_0 p_0}^{\,p-q} \;\middle|\; (p \ge q) \wedge (p + q \le r) \right\} \quad (7)$$
$$M_{Re} = \left\{ m_{Re} = \mathrm{Re}(\phi) \;\middle|\; \phi \in B \right\} \quad (8)$$
$$M_{Im} = \left\{ m_{Im} = \mathrm{Im}(\phi) \;\middle|\; (\phi \in B) \wedge (p > q) \wedge (p \ne p_0 \vee q \ne q_0) \right\} \quad (9)$$
$$M = M_{Re} \cup M_{Im} \quad (10)$$
where B is the base set, in complex form, of the r-order moment invariants M defined in Equation (7). According to Equations (8)–(10), extracting the real parts and the unconjugated imaginary parts of all elements in set B constitutes the final representation of the r-order moment invariant set M.
Here, q_0 and p_0 are indices that must satisfy the conditions for any admissible image, and the base set defined by Equations (7)–(10) depends on their selection. In practice, q_0 and p_0 should be kept as small as possible, because lower-order moments are less sensitive to noise than higher-order ones; moreover, c_{q_0 p_0} approaching 0 makes the values of the invariants unstable.
In this study, for dealing with topology images, based on the algorithm proposed in [33] and taking p_0 = 2, q_0 = 1, and r = 4, the set B containing seven complex elements is calculated using Equation (7). Then, taking the real parts of all elements in set B according to Equation (8), and the imaginary parts of some of them according to Equation (9), we obtain a fourth-order moment invariant base set M containing a total of 11 elements, as described in Equation (11):
$$M = \{ M_i \mid i = 1, 2, \ldots, 11 \} = \begin{cases} M_1 = \mathrm{Re}(c_{11} c_{12}^0), & M_2 = \mathrm{Re}(c_{20} c_{12}^2), \\ M_3 = \mathrm{Im}(c_{20} c_{12}^2), & M_4 = \mathrm{Re}(c_{21} c_{12}^1), \\ M_5 = \mathrm{Re}(c_{30} c_{12}^3), & M_6 = \mathrm{Im}(c_{30} c_{12}^3), \\ M_7 = \mathrm{Re}(c_{22} c_{12}^0), & M_8 = \mathrm{Re}(c_{31} c_{12}^2), \\ M_9 = \mathrm{Im}(c_{31} c_{12}^2), & M_{10} = \mathrm{Re}(c_{40} c_{12}^4), \\ M_{11} = \mathrm{Im}(c_{40} c_{12}^4) & \end{cases} \quad (11)$$

2.3. Generalized Regression Neural Network (GRNN)

As in Figure 3, the GRNN model has four layers, which are the input layer, pattern layer, summation layer, and output layer. The fundamental principle of GRNN is nonlinear regression analysis [29], and the mathematical formula can be written as:
$$\hat{y}(x) = \frac{\displaystyle\sum_{i=1}^{N} y_i \exp\!\left( -\frac{(x - x_i)^T (x - x_i)}{2\sigma^2} \right)}{\displaystyle\sum_{i=1}^{N} \exp\!\left( -\frac{(x - x_i)^T (x - x_i)}{2\sigma^2} \right)} \quad (12)$$
where ŷ(x) is the weighted average of all sample observations y_i, and the weight of each observation y_i is the exponential of the negative squared Euclidean distance between the corresponding sample x_i and x. σ is the key smoothing-factor parameter: when σ is large, the prediction ŷ(x) approaches the mean of all sample dependent variables, whereas σ approaching 0 yields a prediction very close to the nearest training sample.
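Equation (12) maps directly onto a few lines of NumPy. This is a hand-rolled sketch of the GRNN forward pass (the experiments in this paper use pyGRNN; the data below are illustrative toy values):

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma):
    """GRNN (Nadaraya-Watson) prediction: a Gaussian-weighted average of
    training targets, with weights from squared Euclidean distance to x."""
    d2 = np.sum((X_train - x) ** 2, axis=1)      # squared distances to x
    w = np.exp(-d2 / (2.0 * sigma ** 2))         # Gaussian kernel weights
    return float(np.sum(w * y_train) / np.sum(w))
```

The two limiting behaviors described above fall out immediately: a tiny σ concentrates all weight on the nearest training sample, while a huge σ flattens the weights and returns the mean of y.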

2.4. Data Processing and Model Training

To obtain a reliable machine-learning-based prediction model, the three neural network models are designed to map the images to the corresponding compliances. A normalization technique [22] is first applied to process the input features and the predicted compliance.
For the compliance prediction task, the GRNN model is designed as in Figure 3. The topology images are processed to calculate the fourth-order moment invariants as input features using Equations (5)–(10). In addition to the 11 moment invariants, the loading position (F_x, F_y) and the material volume fraction V_f are also chosen as input features of the GRNN.
The input–output pairs and model structures for the DNN and CNN models are detailed in Figure A2. The layer structure of the DNN model differs from that of the GRNN model, but it is designed to use the same input information. The CNN model is designed to automatically and adaptively learn topology image information and predict the compliance, with the layer structure, parameters, and optimization techniques taken from [23].
The platform for creating datasets is configured with topy 0.4.0 in Python2.7. The network training platform is configured with pyGRNN 0.1.2, scikit-learn 1.0.2, CUDA 11.6, and torch 1.13.1+cu116 with Nvidia driver version 512.36 (GPU: Nvidia RTX3060).

3. Results and Discussion

3.1. Moment Invariant Calculations

The moment invariant is a feature extraction technique used to extract global features for shape recognition and identification analysis. Figure 4 gives several classic topology configurations obtained from the dataset, and the calculated 11 moment invariants for each case are summarized in Table 2. The 11 independent variables cover the structural information, with translation, scaling, and rotation invariance, under a set of loading conditions and material volume constraints.
For developing a deep-learning-based image recognition algorithm, the moment invariants are much more efficient than the traditional image-processing pipeline of CNN models.

3.2. Model Performance Evaluation

3.2.1. Model Accuracy

For regression problems, the mean squared error (MSE) loss and the coefficient of determination R² are used to evaluate the model's performance.
The MSE and R 2 can be computed as:
$$MSE = \frac{1}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)^2 \quad (13)$$
$$R^2 = 1 - \frac{\sum_i (\hat{y}_i - y_i)^2}{\sum_i (\bar{y} - y_i)^2} \quad (14)$$
where ŷ_i is the predicted compliance, y_i is the ground truth, and ȳ is the average of all compliances.
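For reference, the two metrics can be computed as follows, in a plain NumPy sketch equivalent to Equations (13) and (14):

```python
import numpy as np

def mse(y_pred, y_true):
    """Mean squared error, Equation (13)."""
    return float(np.mean((y_pred - y_true) ** 2))

def r2(y_pred, y_true):
    """Coefficient of determination, Equation (14)."""
    ss_res = np.sum((y_pred - y_true) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

A perfect predictor gives MSE = 0 and R² = 1, while a predictor that always outputs the mean of the targets gives R² = 0.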
Since the model is trained on the dataset of the simply supported beam and the cantilever beam, Figure 5 first evaluates the performance of the GRNN model on the two tasks within the dataset. For the simply supported beam, the GRNN model achieves R² values of 0.981 and 0.970 on the training and testing datasets, respectively; for the cantilever beam, the values are 0.992 and 0.970.
Figure 6 offers a better insight into the learning performance of the DNN and CNN models by displaying graphs of loss and accuracy over each epoch for training and testing operations. It can be found that compared to the DNN model, the CNN model presents as a very powerful and efficient model that performs automatic feature extraction to achieve high accuracy in predicting topology compliance.

3.2.2. Model Efficiency

The finalized model parameters for GRNN are summarized in Table 3. Additionally, the parameter settings for DNN and CNN models are summarized in Table A3 and Table A4 in Appendix B, respectively.
Table 4 summarizes the prediction accuracy on the training and testing sets and the computational efficiency of the three neural networks. In terms of prediction accuracy, all three models perform quite well, with R² higher than 0.96; the GRNN model reaches a high precision of 0.998 and 0.994 on the training and testing datasets, respectively, only marginally lower than the CNN model, which achieves an R² of 0.999.
As for computational cost, the GRNN significantly outperforms the other two neural network models. In particular, the training completion time for the GRNN algorithm is 1191.35 s, roughly 30 times faster than the CNN model's 38,662.40 s. Moreover, the well-trained GRNN model predicts topology compliances in 0.0019 s, a significant improvement over the CNN model's 2.71 s. Compared with the GRNN model, the DNN model achieves equivalent accuracy but has a training cost roughly 7 times higher and a prediction time roughly 3 times slower. Overall, the GRNN model excels at obtaining high precision in predicting the compliances at the lowest computational cost.
It is evident that the well-known CNN model is a very effective tool for automatically extracting features for dealing with image recognition and segmentation problems, but it always comes with a significant training burden. As seen in Table A4, training a CNN model necessitates thousands of parameters. On the one hand, this raises the expense of training, and on the other, it turns the model into a black box with limited interpretability [22]. In contrast, the DNN model in Table A3 has a simpler layer structure and fewer parameters, which reduces training costs but makes certain accuracy tradeoffs.
Regarding the GRNN model in Table 3, it holds the simplest layer structure, with only one hyperparameter ( σ ), which is the major advantage over other types of NN models. Note that we explore a range of σ values before settling on the magnitude of 0.023, as in Table 3. Meanwhile, it maintains the highest accuracy attributed to the moment invariant analysis. With a large number of datasets, the GRNN model also has the intrinsic capacity to learn rapidly and quickly converge to the best parameter; therefore, it offers the lowest training cost and produces predictions quickly.
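The σ exploration mentioned above can be sketched as a simple hold-out grid search. This is an assumed procedure for illustration only; the paper does not detail how the range of σ values was explored before settling on 0.023:

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma):
    """GRNN prediction as in Equation (12)."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return float(np.sum(w * y_train) / np.sum(w))

def select_sigma(X_tr, y_tr, X_val, y_val, grid):
    """Pick the smoothing factor minimizing hold-out MSE over a grid."""
    best_sigma, best_mse = None, np.inf
    for s in grid:
        preds = np.array([grnn_predict(X_tr, y_tr, x, s) for x in X_val])
        m = float(np.mean((preds - y_val) ** 2))
        if m < best_mse:
            best_sigma, best_mse = s, m
    return best_sigma, best_mse
```

Because σ is the GRNN's only hyperparameter, this one-dimensional search is cheap compared with tuning the layer widths, learning rates, and epochs of the DNN and CNN models.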

3.3. Model Generalizability

Using the SIMP algorithm, varied structural topology configurations are generated as unknown samples to examine the generalizability of the trained GRNN model.

3.3.1. Generalizability on Rotated Topology Configurations

As presented in Table 5 and Table 6, for the samples of simply supported beams and cantilever beams, the compliances predicted from the three neural network models are compared with the FEA-determined ground-truth values.
For each task, we select four configurations (denoted images [0°]) that involve diverse constraint information of loading positions, material volume fractions, and iteration steps. For the simply supported beams, the FEA-determined structural compliances are 12.086, 23.871, 16.116, and 20.538, with the constraint details given in Table A1 in Appendix A. For the cantilever beams, the FEA-determined compliances are 213.518, 532.576, 228.132, and 168.211, with the constraints listed in Table A2 in Appendix A.
Firstly, the three models demonstrate comparable generalization capability on the unrotated topology configurations (images [0°]), with the GRNN model exhibiting the best performance with a maximum relative error of less than 3.5%. Furthermore, when those images are rotated clockwise by 90°, 180°, and 270°, the time-consuming CNN model fails to handle the geometric transformations, with prediction errors greater than 91%. In contrast, for the GRNN and DNN models, the rotation-invariance property of the moment invariant analysis enables the networks to predict the compliances regardless of orientation, so the predicted outcomes are identical to those for the configurations before rotation (images [0°]).
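The rotation-invariance argument can be verified numerically: rotating an image by 90° multiplies each central complex moment by a unit phase factor, leaving its modulus unchanged. A toy check on a hypothetical blob image, not one from the dataset:

```python
import numpy as np

def central_complex_moment(img, p, q):
    """Complex moment taken about the image centroid; its modulus |c_pq|
    is rotation-invariant (rotation multiplies c_pq by a unit phase)."""
    ys, xs = np.indices(img.shape)
    m00 = img.sum()
    cx = (xs * img).sum() / m00
    cy = (ys * img).sum() / m00
    z = (xs - cx) + 1j * (ys - cy)
    return complex(np.sum((z ** p) * (np.conj(z) ** q) * img))

img = np.zeros((8, 8))
img[1:4, 2:7] = 1.0                  # hypothetical 3x5 blob of material
rot = np.rot90(img)                  # rotate the image by 90 degrees
before = abs(central_complex_moment(img, 2, 0))
after = abs(central_complex_moment(rot, 2, 0))
```

A 90° rotation maps c_20 to −c_20 (phase i² = −1), so the moduli agree exactly, which is precisely why rotated inputs do not perturb moment-invariant features.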
Figure 7 further explores the model predictions along with iterations from 0 to 40. It can be seen that the developed GRNN model accurately captures the nature of minimization of compliance along the iterations. The comparison demonstrates another advantage of using moment invariants to extract structural information and then employing GRNN for predicting compliances. When the topological configurations are rotated, it appears that the CNN model entirely loses its ability to predict. However, GRNN may still sustain a comparatively high level of prediction accuracy owing to the rotational invariance of the moments.

3.3.2. Generalizability on Topology Configurations with Different Material Volume Fractions

It is known that topology optimization follows the compliance minimization criterion subject to mechanical constraints of a given load and material volume fraction. Several studies show that an increased material volume fraction produces a decreased compliance magnitude [34].
Figure 8 plots the predicted compliance in response to a varied material volume fraction V_f for the two topology optimization tasks. The underlying relationship between compliance and the material-volume-fraction input feature is accurately learned, and the GRNN model exhibits outstanding generalization ability for predicting compliance as the material volume fraction changes.

4. Conclusions and Outlook

In this work, a novel GRNN algorithm is developed to predict the topology compliance, together with the moment invariants as independent variables, which are obtained from the FEA-determined structural topology. The algorithm is trained using data produced by the SIMP approach, which takes into account two classic tasks—a cantilever beam and a simply supported beam—at various iteration stages. With a comparison to CNN and DNN models, the model is thoroughly evaluated using the metrics of accuracy, efficiency, and generalizability. Through the study, the following conclusions can be drawn.
(1) The prediction accuracy of the GRNN model in the training and testing sets is supported by R 2 > 0.97 and drastically shortens the training and prediction cost compared to the CNN and DNN models.
(2) Compared with the DNN and CNN algorithms, the GRNN model shows the best generalization ability on compliance prediction for structural topology with rotations and under different material volume fractions.
(3) The moment invariants have translation, scaling, and rotational invariance, and the evaluation demonstrates that using the moment invariants as features is important in determining a reliable deep learning model for compliance prediction in topology optimization problems.
In this study, the GRNN is considered a preferred algorithm, with high accuracy, efficiency, and generalization ability, for exploring the underlying relationship between a structural topology and its compliance. However, we acknowledge several limitations. The current work adopts an end-to-end paradigm to predict the structural compliances in the SIMP method, which represents a simplified classic isotropic scenario without considering the manufacturing constraints of fiber-reinforced composites. The generalizable deep learning method represents a potential strategy for replacing the FEA calculation in high-resolution problems within the traditional topology optimization framework. Future work can incorporate constraints on fiber placement and path planning, which are essential for customizing the mechanical anisotropy in manufacturing and are more costly in traditional numerical implementations.
The current work is promising in that it can be extended to the revised design issues and generate real-time optimization for advanced FRPC structure design [35,36]. It can also be utilized together with fiber printing techniques for applications in additive manufacturing [37].

Author Contributions

Y.Z.: conceptualization, methodology, software, writing—original draft, and review. Z.C.: software, data curation, visualization, writing—review and editing. Y.D.: conceptualization, methodology, writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

The work is financially supported by the National Natural Science Foundation of China (12302184), Shanghai Pujiang Talent Program (22PJ1413800), Shanghai Natural Science Fund (22ZR1404500), and Fundamental Research Funds for the Central Universities.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available upon reasonable request from the corresponding authors.

Conflicts of Interest

The authors declare that they have no conflict of interest to report in this paper.

Appendix A. Topology Configurations Used for Model Generalization Evaluation

Figure A1. Demonstration of the dataset in accordance with SIMP implementations: showing the constraints and topology configurations at iterations from 0 to 40.
Table A1. The loading position, material volume fraction, and iteration of the topology configurations in accordance with Table 5.
Images [0°] | Load Position | Material Volume Fraction | Iteration
Entropy 25 01396 i009 | 329 | 0.4510 | 10
Entropy 25 01396 i010 | 2297 | 0.5611 | 20
Entropy 25 01396 i011 | 3608 | 0.6206 | 30
Entropy 25 01396 i012 | 1148 | 0.3738 | 40
Table A2. The loading position, material volume fraction, and iteration of the topology configurations in accordance with Table 6.
Images [0°] | Load Position | Material Volume Fraction | Iteration
Entropy 25 01396 i013 | 4958 | 0.5143 | 10
Entropy 25 01396 i014 | 4924 | 0.3328 | 20
Entropy 25 01396 i015 | 4960 | 0.6766 | 30
Entropy 25 01396 i016 | 4948 | 0.5483 | 40

Appendix B. Deep Neural Networks of DNN and CNN

Figure A2. Illustration of model architecture for neural network models of (a) DNN and (b) CNN.
The DNN model takes 14 features as inputs and is finalized with three hidden layers, a batch size of 71, a learning rate of 0.0001, and 400 epochs. The CNN model treats compliance prediction as a topology image recognition problem and takes images with a resolution of 210 × 543. As described in [23], the model has a batch size of 32, a learning rate of 0.001, and 300 epochs, with the structure finalized as two convolutional layers and two max-pooling layers, followed by a flatten layer and three hidden layers. For the DNN and CNN models, the number of parameters for each input and output layer within the structure is summarized in Table A3 and Table A4, respectively.
Table A3. Parameter settings of the DNN neural network model.
Layer (Type) | Input Shape | Output Shape
Input | 14 | 248
Hidden1 | 248 | 253
Hidden2-1 | 253 | 253
Hidden2-2 | 253 | 253
Hidden2-3 | 253 | 253
Output | 253 | 1
Table A4. Parameter settings of the CNN neural network model.
Layer (Type) | Input Shape | Output Shape
CONV1 | 121,632 | 3,600,896
Max pooling1 | 3,600,896 | 898,560
CONV2 | 898,560 | 1,725,888
Max pooling2 | 1,725,888 | 425,600
Flatten | 425,600 | 425,600
Hidden1 | 425,600 | 400
Hidden2 | 400 | 200
Hidden3 | 200 | 1
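The layer shapes in Table A3 can be sanity-checked with a plain NumPy forward pass. The sketch below reproduces only the DNN topology; the random weights, the ReLU activation, and the initialization scale are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Layer widths of the DNN from Table A3: 14 input features, one 248-unit
# layer, three 253-unit hidden layers, and a scalar compliance output.
DNN_WIDTHS = [14, 248, 253, 253, 253, 1]

def dnn_forward(x, rng=np.random.default_rng(0)):
    """Illustrative forward pass with random weights (ReLU hidden, linear output)."""
    h = x
    layer_pairs = list(zip(DNN_WIDTHS[:-1], DNN_WIDTHS[1:]))
    for i, (n_in, n_out) in enumerate(layer_pairs):
        W = rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in)
        b = np.zeros(n_out)
        h = h @ W + b
        if i < len(layer_pairs) - 1:   # ReLU on all but the output layer
            h = np.maximum(h, 0.0)
    return h

features = np.ones((1, 14))            # one sample with 14 input features
print(dnn_forward(features).shape)     # (1, 1): a single compliance prediction
```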

References

  1. Rajak, D.K.; Pagar, D.D.; Menezes, P.L.; Linul, E. Fiber-reinforced polymer composites: Manufacturing, properties, and applications. Polymers 2019, 11, 1667. [Google Scholar] [CrossRef]
  2. Wong, J.; Altassan, A.; Rosen, D.W. Additive manufacturing of fiber-reinforced polymer composites: A technical review and status of design methodologies. Compos. Part B Eng. 2023, 146, 110603. [Google Scholar] [CrossRef]
  3. Pham, T.; Kwon, P.; Foster, S. Additive manufacturing and topology optimization of magnetic materials for electrical machines—A review. Energies 2021, 14, 283. [Google Scholar] [CrossRef]
  4. Zhao, Z.L.; Rong, Y.; Yan, Y.; Feng, X.Q.; Xie, Y.M. A subdomain-based parallel strategy for structural topology optimization. Acta Mech. Sin. 2023, 39, 422357. [Google Scholar] [CrossRef]
  5. Zhang, W.; Yang, J.; Xu, Y.; Gao, T. Topology optimization of thermoelastic structures: Mean compliance minimization or elastic strain energy minimization. Struct. Multidiscip. Optim. 2014, 49, 417–429. [Google Scholar] [CrossRef]
  6. Charoen-amornkitt, P.; Alizadeh, M.; Suzuki, T.; Tsushima, S. Entropy generation analysis during adjoint variable-based topology optimization of porous reaction-diffusion systems under various design dimensionalities. Int. J. Heat Mass Transf. 2023, 202, 123725. [Google Scholar] [CrossRef]
  7. Jafari-Asl, J.; Seghier, M.E.A.B.; Ohadi, S.; van Gelder, P. Efficient method using Whale Optimization Algorithm for reliability-based design optimization of labyrinth spillway. Appl. Soft Comput. 2021, 101, 107036. [Google Scholar] [CrossRef]
  8. Keshtegar, B.; Meng, D.; Ben Seghier, M.E.A.; Xiao, M.; Trung, N.T.; Bui, D.T. A hybrid sufficient performance measure approach to improve robustness and efficiency of reliability-based design optimization. Eng. Comput. 2021, 37, 1695–1708. [Google Scholar] [CrossRef]
  9. Bendsøe, M.P. Optimal shape design as a material distribution problem. Struct. Multidiscip. Optim. 1989, 1, 193–202. [Google Scholar] [CrossRef]
  10. Allaire, G.; Jouve, F.; Toader, A.-M. A level-set method for shape optimization. C. R. Math. 2002, 334, 1125–1130. [Google Scholar] [CrossRef]
  11. Xie, Y.; Steven, G. A simple evolutionary procedure for structural optimization. Comput. Struct. 1993, 49, 885–896. [Google Scholar] [CrossRef]
  12. Guo, X.; Zhang, W.; Zhong, W. Doing topology optimization explicitly and geometrically—A new moving morphable components based framework. J. Appl. Mech. 2014, 81, 081009. [Google Scholar] [CrossRef]
  13. Liu, C.; Zhu, Y.; Sun, Z.; Li, D.; Du, Z.; Zhang, W.; Guo, X. An efficient moving morphable component (MMC)-based approach for multi-resolution topology optimization. Struct. Multidiscip. Optim. 2018, 58, 2455–2479. [Google Scholar] [CrossRef]
  14. Rozvany, G. The SIMP method in topology optimization-theoretical background, advantages and new applications. In Proceedings of the 8th Symposium on Multidisciplinary Analysis and Optimization, Long Beach, CA, USA, 6–8 September 2000; p. 4738. [Google Scholar]
  15. Zegard, T.; Paulino, G.H. Toward GPU accelerated topology optimization on unstructured meshes. Struct. Multidiscip. Optim. 2013, 48, 473–485. [Google Scholar] [CrossRef]
  16. Alber, M.; Buganza Tepole, A.; Cannon, W.R.; De, S.; Dura-Bernal, S.; Garikipati, K.; Karniadakis, G.; Lytton, W.W.; Perdikaris, P.; Petzold, L.; et al. Integrating machine learning and multiscale modeling—Perspectives, challenges, and opportunities in the biological, biomedical, and behavioral sciences. NPJ Digit. Med. 2019, 2, 115. [Google Scholar] [CrossRef] [PubMed]
  17. Zhao, Y.; Chen, Z.; Dong, Y.; Tu, J. An interpretable LSTM deep learning model predicts the time-dependent swelling behavior in CERCER composite fuels. Mater. Today Commun. 2023, 37, 106998. [Google Scholar] [CrossRef]
  18. Chen, Z.; Liu, W. An efficient parameter adaptive support vector regression using K-means clustering and chaotic slime mould algorithm. IEEE Access 2020, 8, 156851–156862. [Google Scholar] [CrossRef]
  19. Huang, M.; Du, Z.; Liu, C.; Zheng, Y.; Cui, T.; Mei, Y.; Li, X.; Zhang, X.; Guo, X. Problem-independent machine learning (PIML)-based topology optimization—A universal approach. Extrem. Mech. Lett. 2022, 56, 101887. [Google Scholar] [CrossRef]
  20. Kallioras, N.A.; Kazakis, G.; Lagaros, N.D. Accelerated topology optimization by means of deep learning. Struct. Multidiscip. Optim. 2020, 62, 1185–1212. [Google Scholar] [CrossRef]
  21. Dong, Y.; Tao, J.; Zhang, Y.; Lin, W.; Ai, J. Deep learning in aircraft design, dynamics, and control: Review and prospects. IEEE Trans. Aerosp. Electron. Syst. 2021, 57, 2346–2368. [Google Scholar] [CrossRef]
  22. Li, Z.; Zhang, Y.; Ai, J.; Zhao, Y.; Yu, Y.; Dong, Y. A Lightweight and Explainable Data-driven Scheme for Fault Detection of Aerospace Sensors. IEEE Trans. Aerosp. Electron. Syst. 2023; early access. [Google Scholar] [CrossRef]
  23. Lee, S.; Kim, H.; Lieu, Q.X.; Lee, J. CNN-based image recognition for topology optimization. Knowl.-Based Syst. 2020, 198, 105887. [Google Scholar] [CrossRef]
  24. Yan, J.; Zhang, Q.; Xu, Q.; Fan, Z.; Li, H.; Sun, W.; Wang, G. Deep learning driven real time topology optimisation based on initial stress learning. Adv. Eng. Inform. 2022, 51, 101472. [Google Scholar] [CrossRef]
  25. Li, B.; Huang, C.; Li, X.; Zheng, S.; Hong, J. Non-iterative structural topology optimization using deep learning. Comput.-Aided Des. 2019, 115, 172–180. [Google Scholar] [CrossRef]
  26. Rade, J.; Balu, A.; Herron, E.; Pathak, J.; Ranade, R.; Sarkar, S.; Krishnamurthy, A. Algorithmically-consistent deep learning frameworks for structural topology optimization. Eng. Appl. Artif. Intell. 2021, 106, 104483. [Google Scholar] [CrossRef]
  27. Hu, M.K. Visual pattern recognition by moment invariants. IRE Trans. Inf. Theory 1962, 8, 179–187. [Google Scholar]
  28. Dong, Y.; Huang, J.; Ai, J. Visual perception-based target aircraft movement prediction for autonomous air combat. J. Aircr. 2015, 52, 538–552. [Google Scholar] [CrossRef]
  29. Specht, D.F. A general regression neural network. IEEE Trans. Neural Netw. 1991, 2, 568–576. [Google Scholar] [CrossRef]
  30. Zheng, X.; Yang, R.; Wang, Q.; Yan, Y.; Zhang, Y.; Fu, J.; Liu, Z. Comparison of GRNN and RF algorithms for predicting heat transfer coefficient in heat exchange channels with bulges. Appl. Therm. Eng. 2022, 217, 119263. [Google Scholar] [CrossRef]
  31. Siqueira, A.S.; da Silva, R.C.; Santos, L.R. Perprof-py: A python package for performance profile of mathematical optimization software. J. Open Res. Softw. 2016, 4, e12. [Google Scholar] [CrossRef]
  32. Sosnovik, I.; Oseledets, I. Neural networks for topology optimization. Russ. J. Numer. Anal. Math. Model. 2019, 34, 215–223. [Google Scholar] [CrossRef]
  33. Flusser, J.; Zitova, B.; Suk, T. Moments and Moment Invariants in Pattern Recognition; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  34. Guest, J.K. Imposing maximum length scale in topology optimization. Struct. Multidiscip. Optim. 2009, 37, 463–473. [Google Scholar] [CrossRef]
  35. Sattari, K.; Xie, Y.; Lin, J. Data-driven algorithms for inverse design of polymers. Soft Matter 2021, 17, 7607–7622. [Google Scholar] [CrossRef] [PubMed]
  36. Hao, X.; Zhang, G.; Deng, T. Improved Optimization of a Coextrusion Die with a Complex Geometry Using the Coupling Inverse Design Method. Polymers 2023, 15, 3310. [Google Scholar] [CrossRef] [PubMed]
  37. Liu, J.; Gaynor, A.T.; Chen, S.; Kang, Z.; Suresh, K.; Takezawa, A.; Li, L.; Kato, J.; Tang, J.; Wang, C.C.; et al. Current and future trends in topology optimization for additive manufacturing. Struct. Multidiscip. Optim. 2018, 57, 2457–2483. [Google Scholar] [CrossRef]
Figure 1. The flowchart of the work.
Figure 2. Illustration of topology optimization for two tasks: (a) a simply supported beam and (b) its compliance iteration history; (c) a cantilever beam and (d) its compliance iteration history.
Figure 3. The GRNN framework uses the moment invariants of the topology configuration as input features to predict compliance.
Figure 4. Calculation of moment invariants with six classic structural topology configurations, which include the simply supported beam in (a–c) and the cantilever beam in (d–f) at iterations of 10, 20, and 40.
Figure 5. Evaluation of the GRNN performance on simply supported beams with (a) the training set and (c) the testing set; and on cantilever beams with (b) the training set and (d) the testing set.
Figure 6. Plots of MSE and R² with epochs of the (a) DNN model and (b) CNN model.
Figure 7. Model generalization on a simply supported beam with the topology configuration (a) 0 degree and (b) 180 degree clockwise rotation; and on a cantilever beam with the topology configuration (c) 0 degree and (d) 180 degree clockwise rotation.
Figure 8. Investigation of the GRNN-predicted compliance with different volume fractions for (a) a simply supported beam and (b) a cantilever beam.
Table 1. Computing proportion for each module of the SIMP method.
Module in SIMP | Computational Cost
FEA | 30.3%
Sensitivities | 0.1%
Filter | 67.4%
Variables update | 2.2%
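Table 1 attributes most of the per-iteration cost to the filter. For context, the sketch below implements the classic cone-weighted density filter used in standard SIMP codes; the paper's exact filter implementation is not reproduced here, and the function name and radius value are illustrative. The nested neighbor loops show why a naive filter scales as O(n·r²) per iteration and can dominate the cost.

```python
import numpy as np

def density_filter(x, rmin=1.5):
    """Cone-weighted density filter over a 2-D element grid (standard SIMP filtering).

    Each element's density becomes a weighted average of its neighbors within
    radius `rmin`; the four nested loops are the expensive part in a naive
    implementation.
    """
    nely, nelx = x.shape
    xf = np.zeros_like(x)
    r = int(np.ceil(rmin)) - 1
    for i in range(nelx):
        for j in range(nely):
            wsum, val = 0.0, 0.0
            for k in range(max(i - r, 0), min(i + r + 1, nelx)):
                for l in range(max(j - r, 0), min(j + r + 1, nely)):
                    w = rmin - np.hypot(i - k, j - l)   # cone (linearly decaying) weight
                    if w > 0:
                        wsum += w
                        val += w * x[l, k]
            xf[j, i] = val / wsum
    return xf

x = np.full((10, 30), 0.5)        # uniform density field
print(np.allclose(density_filter(x), 0.5))   # filtering preserves a uniform field: True
```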
Table 2. Calculations of moment invariants of different topology configurations as in Figure 4.
Moment Invariants | Case a | Case b | Case c | Case d | Case e | Case f
m1 | 0.011625 | 0.011821 | 0.011668 | 0.010311 | 0.010683 | 0.010505
m2 | 0.000767 | 0.000752 | 0.000832 | 0.000416 | 0.000458 | 0.000481
m3 | 0.000499 | 0.000506 | 0.000535 | −0.000240 | −0.000278 | −0.000276
m4 | 0.000237 | 0.000228 | 0.000266 | 0.000097 | 0.000110 | 0.000119
m5 | 0.000229 | 0.000222 | 0.000256 | 0.000101 | 0.000115 | 0.000124
m6 | −0.000106 | −0.000126 | −0.000145 | −0.000057 | −0.000065 | −0.000066
m7 | 0.000166 | 0.000169 | 0.000169 | 0.000125 | 0.000130 | 0.000129
m8 | 0.000194 | 0.000190 | 0.000211 | 0.000099 | 0.000109 | 0.000114
m9 | 0.000120 | 0.000123 | 0.000130 | −0.000057 | −0.000065 | −0.000065
m10 | 0.000179 | 0.000173 | 0.000198 | 0.000086 | 0.000096 | 0.000102
m11 | 0.000152 | 0.000152 | 0.000166 | −0.000071 | −0.000081 | −0.000083
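The invariants m1–m11 are fourth-order moment invariants of the density field [27,33]; their exact definitions are not restated here. As a sketch of the underlying construction, the snippet below computes the first two classic Hu invariants from normalized central moments and checks that they are unchanged by the 180° rotation used in the generalization tests (Figure 7, Tables 5 and 6). The toy "topology" array is illustrative, not one of the paper's cases.

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a 2-D density image."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    return ((x - xc) ** p * (y - yc) ** q * img).sum()

def hu_first_two(img):
    """First two Hu invariants from normalized central moments (Hu, 1962)."""
    mu00 = central_moment(img, 0, 0)
    eta = lambda p, q: central_moment(img, p, q) / mu00 ** (1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

# A small asymmetric density field standing in for a topology configuration
topo = np.zeros((8, 12))
topo[1:4, 2:10] = 1.0
topo[4:7, 2:4] = 1.0

# Moment invariants are unchanged by a 180-degree rotation of the image
print(np.allclose(hu_first_two(topo), hu_first_two(np.rot90(topo, 2))))  # True
```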
Table 3. Parameter settings for the GRNN model, with the spread σ being the only hyperparameter.
P (Input Vectors) | T (Target Class Vectors) | Spread σ
[15,242 × 14] | [15,242 × 1] | 0.023
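As Table 3 indicates, the spread σ is the GRNN's only hyperparameter: the network stores the training pairs directly and predicts a Gaussian-kernel-weighted average of the stored targets [29]. The sketch below uses illustrative 1-D data, not the paper's 15,242 × 14 dataset; the function and variable names are ours.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, spread):
    """GRNN regression (Specht, 1991): Gaussian-weighted average of stored targets.

    `spread` (sigma) is the single hyperparameter, as in Table 3.
    """
    # Pattern layer: squared distances from each query to every stored sample
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * spread ** 2))
    # Summation and output layers: normalized weighted average of targets
    return (w @ y_train) / w.sum(axis=1)

# Illustrative 1-D regression: the GRNN interpolates y = x^2 between samples
X = np.linspace(0.0, 1.0, 50)[:, None]   # stored input vectors P
y = X.ravel() ** 2                        # stored targets T
pred = grnn_predict(X, y, np.array([[0.5]]), spread=0.023)[0]
print(pred)                               # close to 0.25
```

Because prediction is a single kernel-weighted pass over the stored samples, "training" reduces to storing data and tuning σ, which is why the GRNN's prediction cost in Table 4 is so low.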
Table 4. Model performance evaluation of accuracy (R²) and computational cost of the three neural networks.
Model | R² (Training Set) | R² (Testing Set) | Training Time | Prediction Time
GRNN | 0.998 | 0.994 | 1191.35 s | 0.0019 s
DNN | 0.991 | 0.964 | 7128.01 s | 0.0061 s
CNN | 0.999 | 0.999 | 38,662.40 s | 2.71 s
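The accuracy entries above are coefficients of determination. As a hedged sketch of the assumed metric (the paper's evaluation script is not given here), R² can be computed as follows, illustrated with the four FEA/GRNN value pairs from Table 5:

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# FEA compliances vs. GRNN predictions for the four samples in Table 5
y_fea = np.array([12.086, 23.871, 16.116, 20.538])
y_grnn = np.array([12.128, 24.684, 16.127, 20.607])
print(round(r_squared(y_fea, y_grnn), 3))  # -> 0.992
```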
Table 5. Model generalization on unknown samples of simply supported beams. Values in parentheses are relative errors with respect to the FEA calculation.
Images [0°] | FEA Calculation | GRNN | DNN | DNN [0°] | DNN [90°] | DNN [180°] | DNN [270°]
Entropy 25 01396 i001 | 12.086 | 12.128 (0.352%) | 11.842 (2.017%) | 11.516 (4.716%) | 26.555 (119.719%) | 22.240 (84.013%) | 55.067 (355.633%)
Entropy 25 01396 i002 | 23.871 | 24.684 (3.408%) | 24.758 (3.716%) | 23.297 (2.405%) | 30.567 (28.051%) | 11.785 (50.630%) | 33.911 (42.062%)
Entropy 25 01396 i003 | 16.116 | 16.127 (0.064%) | 15.923 (1.020%) | 15.627 (3.035%) | 20.688 (28.364%) | 23.765 (47.464%) | 50.847 (215.516%)
Entropy 25 01396 i004 | 20.538 | 20.607 (0.334%) | 21.134 (2.753%) | 19.826 (3.468%) | 38.330 (86.626%) | 14.006 (31.806%) | 30.903 (50.465%)
Table 6. Model generalization on unknown samples of the cantilever beams. Values in parentheses are relative errors with respect to the FEA calculation.
Images [0°] | FEA Calculation | GRNN | DNN | DNN [0°] | DNN [90°] | DNN [180°] | DNN [270°]
Entropy 25 01396 i005 | 213.518 | 208.998 (2.117%) | 206.768 (3.161%) | 213.867 (0.163%) | 39.191 (81.645%) | 339.148 (58.838%) | 39.817 (81.352%)
Entropy 25 01396 i006 | 532.576 | 546.824 (2.675%) | 515.810 (3.148%) | 537.995 (1.017%) | 30.539 (94.266%) | 90.565 (82.995%) | 35.502 (93.334%)
Entropy 25 01396 i007 | 228.132 | 228.900 (0.337%) | 232.161 (1.766%) | 225.368 (1.211%) | 12.440 (94.547%) | 110.099 (51.739%) | 19.820 (91.312%)
Entropy 25 01396 i008 | 168.211 | 168.694 (0.287%) | 167.335 (0.521%) | 167.391 (0.488%) | 10.939 (93.497%) | 90.536 (46.177%) | 13.731 (91.837%)
