Article

A Geometric-Enhanced Neural Network Method for Scalable and High-Resolution Topology Optimization

1 Zhejiang Zhoushan Ocean Power Transmission Research Institute Co., Ltd., Zhoushan 316000, China
2 Key Lab of Smart Prevention and Mitigation of Civil Engineering Disaster of the Ministry of Industry and Information Technology, Harbin 150090, China
3 School of Civil Engineering, Harbin Institute of Technology, Harbin 150090, China
* Authors to whom correspondence should be addressed.
Symmetry 2026, 18(3), 537; https://doi.org/10.3390/sym18030537
Submission received: 6 February 2026 / Revised: 11 March 2026 / Accepted: 19 March 2026 / Published: 21 March 2026
(This article belongs to the Special Issue Intelligent Modeling of Fluid and Structure)

Abstract

Topology optimization is a powerful methodology for designing lightweight, economical, and efficient structures. However, traditional approaches often face challenges such as numerical instabilities and high computational costs, limiting their practical applicability. Recently, radial basis function (RBF)-based and neural network-based methods have emerged as promising alternatives through the reparameterization of the density field. Despite their potential, these methods typically rely on isotropic basis functions or static feature encodings, which limit their ability to capture fine-scale structural details, particularly in high-aspect-ratio features such as slender bar-like members and in geometrically symmetric structural patterns. To address this research gap, this paper introduces a novel Geometric-enhanced Neural Network (GeNN) for topology optimization based on Anisotropic Radial Basis Functions (ARBFs). By embedding ARBFs into the neural network framework, the proposed method provides a more geometrically informed density representation and inherently suppresses checkerboard patterns without additional filtering techniques. The proposed GeNN framework is thoroughly validated on benchmark problems, including several representative symmetric structural layouts, demonstrating improved computational efficiency compared to traditional methods and other neural-network-based topology optimization methods. In addition, the proposed method demonstrates strong scalability across various optimization problems. Notably, GeNN successfully optimized a 256 m-long bridge involving millions of degrees of freedom within ten minutes on a standard personal computer. This advancement demonstrates the practical potential of the proposed method for large-scale civil engineering applications.

1. Introduction

Since its inception in the pioneering work [1], topology optimization has attracted substantial attention as a powerful tool for achieving optimal structural performance. The primary objective of topology optimization is to determine the optimal material distribution within a given design domain, subject to a prescribed volume fraction, in order to minimize a specified objective function [2,3,4]. Based on topology optimization methods, a structural designer can obtain multiple conceptual design solutions. Among the numerous methods, the most popular one is the Solid Isotropic Material with Penalization (SIMP) method, which is a density-based topology optimization [2]. This method has been extensively adopted to obtain the optimized results in the fields of aircraft design [5], bridge design [6,7,8], architectural and structural design [9,10,11], and car design [12]. Despite the success of the methods, several challenges remain to be addressed, particularly regarding numerical instabilities (such as checkerboard patterns and blurred structural boundaries) and computational efficiency.
To avoid numerical instabilities, filtering techniques, including density and sensitivity filtering, are often employed [13,14] in density-based topology optimization methods. These methods typically require information from neighboring elements, which can be difficult to obtain in the case of fine meshes or complex domains and geometries [15]. Meanwhile, the application of filtering techniques can also introduce new challenges. For example, when a larger filter radius is employed, it often leads to the emergence of numerous gray elements. To obtain clear boundaries, level-set methods were proposed to produce clear and smooth boundaries without the need for additional post-processing [3,16,17]. Nevertheless, these methods have inherent limitations, particularly the increased computational complexity and cost associated with solving the evolution of level-set functions.
With the rapid development of machine learning (ML), a number of studies have been conducted to address these challenges. For example, Yu et al. employed an artificial neural network to predict an optimized structure based on given boundary conditions and optimization settings [18]. Deep belief networks (DBNs) were used to transform the input data into a higher-level representation, thereby reducing the number of iterations [19]. Qian et al. proposed an explicit level-set approach based on the topology description function (radial basis function)-enhanced neural network for topology optimization [7], where the traditional topology optimization method is transformed into an update process for the network parameters. Li and Ye further introduced a hybrid approach that integrates the method of moving asymptotes and the Adam optimizer for updating the parameters of a radial basis function neural network [20]. Zhang et al. [21] employed neural networks (NN) to perform topology optimization and conducted a comprehensive study of this method. In addition, diffusion models have also been employed to reduce the design variable dimensionality, where transfer learning is utilized to enable generalization to unseen design conditions [22]. More recently, generalizable structural topology generation informed by a few prior designs has also been explored through Fourier and latent modulated neural networks, demonstrating the potential of neural representations for rapid topology synthesis [23]. In parallel, designer-driven topology optimization has also been investigated through human-in-the-loop neural networks, where designer feedback is incorporated into the optimization process [24]. Topology optimization has also been extended to model-and-data-driven robust concurrent optimization, where structural topology and device layout are optimized simultaneously while accounting for load location uncertainty and non-design domain effects [25]. 
It has further been applied to reliability-based, time-dependent, and multi-scale design problems [26,27,28]. Beyond learning-based density or topology representations, recent studies have also explored replacing the conventional FEM module with physics-informed neural models in topology optimization. For example, Jeong et al. proposed a PINN-based topology optimization framework (PINNTO), in which an energy-based PINN is used to replace finite element analysis for determining the deformation state [29]. They further developed a more advanced PINN-based framework for nonlinear and complex topology optimization [30]. In addition to these PINN-based developments, other studies have also highlighted the growing interest in ML-driven topology optimization, including review and real-time optimization frameworks [31,32,33]. In related fields, meta-learning has also been introduced for rapid prediction in geometry-sensitive engineering problems, such as tunneling-induced surface ground deformation [34]. Moreover, lightweight deep networks have shown strong capability in detecting fine-scale crack patterns [35,36], while automatically designed deep architectures have also been explored for topology-sensitive vessel segmentation [37]. These studies further demonstrate the importance of efficient and expressive neural representations for geometrically complex and slender structural patterns.
Although these studies have demonstrated the potential of machine-learning-based topology optimization, further improvements are still needed in the geometrically faithful representation of fine-scale structural features. Existing neural-network-based approaches mainly emphasize optimization efficiency, dimensionality reduction, or parameterized density representation, but the integration of geometrically informed basis functions with neural representations remains relatively limited. In particular, accurately capturing slender members, high-aspect-ratio components, and sharp structural boundaries in a unified and fully differentiable framework is still a challenging issue. This limitation motivates the development of a geometry-enhanced neural representation that can better incorporate anisotropic geometric priors into topology optimization.
In this context, Anisotropic Radial Basis Functions (ARBFs) provide a promising way to enhance the geometric expressiveness of neural representations. ARBFs have demonstrated effectiveness in capturing complex geometric features and improving computational efficiency in applications such as shape representation [38,39]. For example, an ARBF-based method has been successfully applied to simulate the dynamics of a continental-scale ice sheet, where the ratio between typical thickness and length is extremely small [38]. These unique properties make ARBFs particularly well-suited for topology optimization tasks involving slender structures or high-aspect-ratio features, such as bar-like members. However, their application in topology optimization remains underexplored.
To address this research gap, this paper proposes a novel Geometric-enhanced Neural Network (GeNN) method for topology optimization based on Anisotropic Radial Basis Functions (ARBFs). The proposed method employs ARBFs to represent the density field, which allows for a more compact representation of the design space and improves computational efficiency by reducing the number of design variables. The ARBFs are designed to capture complex geometric features, particularly in cases with small thickness-to-length ratios, such as bar-like members. By embedding ARBFs into the neural network architecture, GeNN greatly enhances its nonlinear representation capacity. The method dynamically adjusts the shape and position parameters of ARBFs, effectively capturing complex topological features while producing sharp structural boundaries. In addition, it inherently suppresses checkerboard patterns without additional filtering techniques. Beyond these capabilities, the method offers the flexibility to adjust geometric parameters, enabling the generation of multiple competitive design solutions. This will allow designers to refine and select the most suitable conceptual design based on their expertise. This work advances the current state of the art by (1) introducing a geometry-enhanced neural representation for improving geometric fidelity and computational performance, enabling a continuous, differentiable, and geometrically informed representation of the density field; (2) demonstrating the scalability of the proposed GeNN framework across diverse applications, including heat conduction problems, multi-volume constraint designs, and large-scale structural optimization, and (3) eliminating the need for filtering or manual gradients through an end-to-end differentiable architecture. These contributions establish GeNN as a unified, scalable, and more interpretable alternative to existing neural topology optimization methods.
The remainder of this paper is organized as follows. Section 2 briefly introduces the theoretical foundation of the anisotropic radial basis function. Section 3 presents the details of the proposed method. Several numerical experiments are shown in Section 4 to evaluate the optimization capabilities. Section 5 demonstrates its scalability across various optimization problems. Section 6 discusses the role and impact of key components within the framework. Finally, Section 7 concludes the paper.

2. Theoretical Foundation of the Anisotropic Radial Basis Function

The Anisotropic Radial Basis Function (ARBF) is an extension of the traditional radial basis function [40] (RBF) that introduces anisotropy to enhance the flexibility and adaptability. Unlike traditional isotropic RBFs, which assume uniform scaling in all directions, ARBFs allow for direction-dependent scaling factors, enabling the model to capture the local characteristics of the data more effectively. This enhanced adaptability makes ARBFs particularly suitable for applications involving directional variability or complex spatial structures. Traditional isotropic RBFs are typically defined as
$\varphi(\mathbf{x}, \mathbf{c}_i) = \varphi(\lVert \mathbf{x} - \mathbf{c}_i \rVert)$
where $\mathbf{x}$ is the input point, $\mathbf{c}_i$ is the center of the $i$-th basis function, and $\lVert \cdot \rVert$ denotes the Euclidean norm. Among the various isotropic basis functions, the Gaussian basis function is the most commonly used. Its mathematical expression is given below:
$\varphi(\mathbf{x}, \mathbf{c}_i) = \exp\!\left(-\dfrac{\lVert \mathbf{x} - \mathbf{c}_i \rVert^2}{2\sigma_i^2}\right)$
where $\sigma_i$ is the bandwidth parameter that controls the width or spread of the Gaussian function.
For the anisotropic case, the Gaussian ARBF can be written as
$\varphi(\mathbf{x}, \mathbf{c}_i, \Sigma_i) = \exp\!\left[-\dfrac{1}{2}(\mathbf{x} - \mathbf{c}_i)^{T} \Sigma_i^{-1} (\mathbf{x} - \mathbf{c}_i)\right]$
Here, $\Sigma_i$ is a symmetric positive-definite covariance matrix that controls the shape and orientation of the Gaussian function, allowing it to stretch or compress along different axes (see Figure 1).
As shown in Figure 1, Anisotropic Radial Basis Functions (ARBFs) demonstrate exceptional versatility in representing a wide range of geometric shapes by adjusting their covariance matrix Σ. When Σ is set to the identity matrix (Σ = I), the ARBF generates an isotropic, circular shape. By modifying Σ to a diagonal matrix (Σ = diag(2, 0.5)), the ARBF transitions to an elliptical form, capturing anisotropic scaling along different axes. Introducing off-diagonal terms (e.g., Σ = [[1, 0.8], [0.8, 1]]) further enables the ARBF to model rotated elliptical shapes, reflecting directional dependencies. In extreme configurations, such as Σ = diag(1000, 0.1), the ARBF approximates a linear-like structure, effectively capturing features with high aspect ratios. This remarkable adaptability allows ARBFs to model complex geometric and topological features with precision, making them particularly effective for representing density fields at structural boundaries and achieving sharp, well-defined optimized designs [41].
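The covariance configurations discussed above can be illustrated with a short sketch. This is a minimal NumPy illustration (function name hypothetical, not code from the paper) showing how the same Gaussian ARBF responds at a fixed offset under the identity, elliptical, and extreme high-aspect-ratio covariances:

```python
import numpy as np

def gaussian_arbf(x, c, sigma):
    """Anisotropic Gaussian RBF: exp(-0.5 * (x-c)^T Sigma^{-1} (x-c))."""
    d = np.asarray(x, dtype=float) - np.asarray(c, dtype=float)
    return float(np.exp(-0.5 * d @ np.linalg.inv(sigma) @ d))

c = np.zeros(2)          # kernel center
x = np.array([1.0, 0.0])  # evaluation point, offset along the x-axis

# Sigma = I -> isotropic, circular level sets
iso = gaussian_arbf(x, c, np.eye(2))

# Sigma = diag(2, 0.5) -> elliptical, stretched along the x-axis
ellip = gaussian_arbf(x, c, np.diag([2.0, 0.5]))

# Sigma = diag(1000, 0.1) -> nearly line-like support along the x-axis
line = gaussian_arbf(x, c, np.diag([1000.0, 0.1]))

# The wider the covariance along a direction, the slower the decay there,
# so the responses satisfy line > ellip > iso at this offset.
```

A rotated ellipse follows analogously by adding off-diagonal terms, e.g. `np.array([[1.0, 0.8], [0.8, 1.0]])`.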

3. Proposed Method

Topology optimization is a powerful design tool that aims to determine the optimal material distribution within a given design domain. To illustrate the proposed method, which is based on the Solid Isotropic Material with Penalization (SIMP) formulation, a design domain with predefined loads and constraints is considered (see Figure 2). The compliance minimization problem seeks to maximize the structural stiffness subject to a prescribed volume fraction constraint, and is mathematically expressed as follows:
$$\begin{aligned} \min_{\rho_e} \quad & \mathbf{U}^{T}\mathbf{K}\mathbf{U} = \sum_{e=1}^{N} \mathbf{u}_e^{T}\, \mathbf{k}_e\big(E_e(\rho_e)\big)\, \mathbf{u}_e \\ \text{subject to} \quad & \mathbf{K}\mathbf{U} = \mathbf{F}, \\ & \frac{V(\rho_e)}{V_0} = V_{fc}, \\ & 0 < \rho_e \le 1 \end{aligned}$$
where $\mathbf{U}$, $\mathbf{K}$, and $\mathbf{F}$ are the global displacement vector, stiffness matrix, and force vector, respectively; $\mathbf{u}_e$ and $\mathbf{k}_e$ are the element displacement vector and stiffness matrix, respectively; $\rho_e$ is the design variable representing the element density; and $V(\rho_e)$, $V_0$, and $V_{fc}$ are the material volume, the volume of the design domain, and the prescribed volume fraction, respectively. Young's modulus $E_e(\rho_e)$ is a function of density, as follows:
$E_e(\rho_e) = \rho_e^{p} E_0$
where $E_0$ denotes Young's modulus of the material, and the penalization factor $p$ is set to 3 in this study.
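The penalization effect of this interpolation can be made concrete with a one-line sketch (function name hypothetical):

```python
def simp_modulus(rho, E0=1.0, p=3):
    """SIMP interpolation E_e = rho^p * E0: intermediate densities are
    penalized, pushing the design toward a near-binary 0/1 layout."""
    return (rho ** p) * E0

# With p = 3, a half-dense element retains only 12.5% of the solid
# stiffness while still consuming 50% of the material volume,
# which makes intermediate densities structurally inefficient.
```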
In traditional density-based methods, density is defined at each element. In contrast, the proposed method employs a Geometric-Enhanced Neural Network (GeNN) to represent the density field. Unlike traditional sensitivity-based optimization algorithms, such as the method of moving asymptotes (MMA) [42], a globally convergent version of the MMA (GCMMA) [43], and Optimality Criteria (OC), the proposed method does not directly update element densities. Instead, it optimizes the parameters ($\theta$) of the GeNN using optimization algorithms commonly employed in deep learning, such as Adaptive moment estimation (Adam) [44], thereby indirectly controlling the density distribution. This strategy enables a more flexible exploration of the design space and offers the capability to uncover topology structures that are difficult to achieve with traditional methods.

3.1. Geometric-Enhanced Neural Network

The proposed Geometric-enhanced Neural Network (GeNN) addresses the limitations of previous methods by providing a flexible and efficient representation of the density field. By using Anisotropic Radial Basis Functions (ARBFs), GeNN enables precise control over material distribution, particularly in capturing fine-scale geometric features and high-aspect-ratio structures during the topology optimization. Importantly, the introduction of ARBFs is not merely an increase in the parameter degrees of freedom of conventional isotropic RBFs. By incorporating learnable directional scaling and rotation, ARBFs provide a fundamentally different representational mechanism, enabling the density field to capture direction-dependent geometric features, such as slender members and high-aspect-ratio structures, more effectively and offering a more flexible design space for optimization. The architecture and training procedure of the GeNN are described in detail below.
Following the illustration of the GeNN in Figure 3, a detailed explanation of its architecture and operation is provided. The GeNN utilizes a set of $m$ adaptive anisotropic radial basis functions (ARBFs) randomly distributed within the design domain. Crucially, both the location parameters ($\mathbf{c}_i$) and shape parameters ($\Sigma_i$) of these ARBFs are learnable, enabling the network to dynamically adapt to the optimal density distribution. The location parameters ($\mathbf{c}_i$) are initialized to uniformly cover the design domain, while the shape parameters ($\Sigma_i$) are initialized as constant values with a prescribed initial width, denoted by $\sigma_{init}$. During training, all these parameters are treated as learnable variables and optimized jointly with the network weights via gradient-based backpropagation using the Adam optimizer under the compliance objective and volume constraint.
For any given point $\mathbf{x} = (x, y)$, each ARBF computes its response based on the distance between the point and the center of the basis function, thereby producing the geometric feature vector $\mathbf{F}$ consisting of $m$ distinct responses that capture localized geometric information across the design domain.
$\mathbf{F} = \left[\varphi(\mathbf{x}, \mathbf{c}_1, \Sigma_1),\ \varphi(\mathbf{x}, \mathbf{c}_2, \Sigma_2),\ \ldots,\ \varphi(\mathbf{x}, \mathbf{c}_m, \Sigma_m)\right]$
Essentially, these $m$ responses form the feature vector $\mathbf{F}$ that encodes the spatial context necessary for describing material variations. This feature vector is then fed into a multilayer perceptron (MLP) network. The MLP consists of multiple fully connected layers integrated with nonlinear activation functions (e.g., nn.Tanh()), which enable the network to learn complex mappings between the extracted features and the corresponding material density. Finally, a sigmoid activation function is applied to the output of the last hidden layer, converting the output values to densities in $(0, 1)$, as follows:
$\rho(\mathbf{x}) = \dfrac{1}{1 + e^{-f_\theta(\mathbf{F})}}$
where $f_\theta(\cdot)$ is the MLP network and $\theta$ is its set of learnable parameters. Therefore, the complete set of learnable parameters for the proposed GeNN model includes the location parameters ($\mathbf{c}_i$) and shape parameters ($\Sigma_i$) of the ARBFs, as well as the parameters ($\theta$) of the MLP.
In contrast to conventional element-wise density parameterizations, the ARBF-based representation enforces spatial continuity of the density field. Since each density value is generated by the superposition of smooth anisotropic kernels with finite support, the resulting field inherently exhibits spatial correlation. This continuous parameterization restricts element-scale high-frequency oscillations, which are the primary source of checkerboard patterns in classical SIMP formulations. As a result, checkerboard instabilities are naturally suppressed without the need for additional filtering techniques. This behavior is further validated by the numerical experiments presented in Section 4.
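The forward pass described in this subsection can be sketched as follows. This is a framework-agnostic NumPy illustration with hypothetical names and randomly initialized parameters; the paper's implementation uses PyTorch so that the centers, covariances, and MLP weights are all trainable, and diagonal covariances are assumed here purely for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 20 * 10  # number of ARBFs (the Section 4 default)

# Learnable ARBF parameters: centers and (here diagonal) covariances.
centers = rng.uniform(0.0, 1.0, size=(m, 2))
log_var = np.zeros((m, 2))  # log of per-axis variances of each Sigma_i

# Learnable MLP parameters: two hidden tanh layers of 10 neurons
# (the Section 4 default), followed by a scalar sigmoid output.
W1, b1 = rng.normal(0, 0.1, (m, 10)), np.zeros(10)
W2, b2 = rng.normal(0, 0.1, (10, 10)), np.zeros(10)
W3, b3 = rng.normal(0, 0.1, (10, 1)), np.zeros(1)

def density(points):
    """Map (n, 2) coordinates to (n,) densities in the open interval (0, 1)."""
    d = points[:, None, :] - centers[None, :, :]            # (n, m, 2)
    inv_var = np.exp(-log_var)                              # diagonal Sigma^{-1}
    feats = np.exp(-0.5 * np.sum(d ** 2 * inv_var, axis=-1))  # ARBF features F
    h = np.tanh(feats @ W1 + b1)
    h = np.tanh(h @ W2 + b2)
    return (1.0 / (1.0 + np.exp(-(h @ W3 + b3))))[:, 0]     # sigmoid output

rho = density(rng.uniform(0.0, 1.0, size=(5, 2)))
```

Because each density value is a smooth superposition of the $m$ kernels, neighboring points receive correlated densities, which is the mechanism behind the checkerboard suppression discussed above.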

3.2. Loss Function

During each training step, GeNN calculates the density value at the center of each element and provides this value to the FEA solver for calculating the element displacement $\mathbf{u}_e$. The element strain energy can then be calculated as $\rho_e^{p} \mathbf{u}_e^{T} \mathbf{K}_0 \mathbf{u}_e$, where $\rho_e^{p}$ represents the material penalization. Finally, summing the strain energies of all elements yields the total compliance, as shown below:
$J = \mathbf{U}^{T}\mathbf{K}\mathbf{U} = \sum_{e} \rho_e^{p} \mathbf{u}_e^{T} \mathbf{K}_0 \mathbf{u}_e$
To take full advantage of the nonlinear optimization capability of neural networks, a constrained optimization problem (see Equation (4)) is converted to an unconstrained optimization problem using the Lagrange multiplier method. Its formulation is expressed as follows:
$Loss = \dfrac{J}{J_0} + \lambda \left( \dfrac{V(\rho_e)}{V_0 \cdot V_{fc}} - 1 \right)^2$
where $J_0$ is the initial objective function value obtained when the density of every element in the design domain equals $V_{fc}$, and $\lambda$ is the penalty parameter that balances the compliance objective and the volume constraint. In our implementation, the penalty parameter is progressively increased during training according to a continuation-type schedule ($\lambda \leftarrow \min(100,\ \lambda + 0.5)$ per iteration).
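A minimal sketch of this loss and the continuation schedule (function names hypothetical; this assumes the penalty weight grows by 0.5 per iteration and is capped at 100, per the schedule stated above):

```python
def loss_fn(J, J0, V, V0, Vfc, lam):
    """Normalized compliance plus a quadratic volume-constraint penalty."""
    return J / J0 + lam * (V / (V0 * Vfc) - 1.0) ** 2

def update_lambda(lam, step=0.5, lam_max=100.0):
    """Continuation schedule: grow the penalty weight, capped at lam_max."""
    return min(lam_max, lam + step)

# When the volume constraint is satisfied exactly (V = V0 * Vfc), the
# penalty term vanishes and only the normalized compliance J/J0 remains.
```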

3.3. Optimization Algorithm

Based on the proposed GeNN method and Equations (6)–(9), the compliance minimization problem has been reformulated into a neural network optimization problem. The complete optimization scheme is outlined as follows:
$$\begin{aligned} \min: \quad & Loss \\ \text{find}: \quad & \mathbf{w} = \{\mathbf{c}_1, \ldots, \mathbf{c}_m;\ \Sigma_1, \ldots, \Sigma_m;\ \theta\} \\ \text{s.t.} \quad & \mathbf{K}\mathbf{U} = \mathbf{F} \end{aligned}$$
Comparing Equations (4a) and (10), it can be observed that the design variables in the topology optimization problem have shifted from $\{\rho_1, \ldots, \rho_N\} \in \mathbb{R}^N$ to $\{\mathbf{c}_1, \ldots, \mathbf{c}_m;\ \Sigma_1, \ldots, \Sigma_m;\ \theta\} \in \mathbb{R}^W$. The dimension of the optimization space is thus drastically reduced, i.e., $W \ll N$. For example, in the optimization case shown in Figure 2, the number of optimization variables of the traditional topology optimization method is $N = 12{,}800$, whereas the proposed GeNN method uses $W = 2036$.
To solve Equation (10), the gradient of the loss function with respect to the optimization variable must first be computed. The mathematical expression for this gradient is given as follows:
$\dfrac{\partial L}{\partial w_i} = \sum_{e} \dfrac{\partial L}{\partial \rho_e} \dfrac{\partial \rho_e}{\partial w_i}$
where $\partial \rho_e / \partial w_i$ can be determined analytically via the backpropagation algorithm. It should be noted that the overall loss is a composite function consisting of a finite element analysis (FEA) component and a differentiable neural-network component. For the compliance term, the sensitivity with respect to the density field is evaluated using the classical adjoint sensitivity formulation in density-based topology optimization (see, e.g., the 99-line topology optimization code [45]), while the remaining derivatives are obtained through backpropagation. Its complete formulation is presented as follows:
$\dfrac{\partial L}{\partial \rho_e} = -\dfrac{1}{J_0}\, p\, \rho_e^{p-1} \mathbf{u}_e^{T} \mathbf{K}_0 \mathbf{u}_e + \dfrac{2\lambda}{V_0 V_{fc}} \left( \dfrac{V(\rho_e)}{V_0 V_{fc}} - 1 \right)$
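This element-wise sensitivity can be evaluated directly; a minimal sketch (names hypothetical, assuming unit element volumes so that $\partial V / \partial \rho_e = 1$):

```python
def dloss_drho(rho_e, ue_K0_ue, J0, V, V0, Vfc, lam, p=3):
    """Analytic sensitivity of the loss w.r.t. one element density.

    rho_e     : element density
    ue_K0_ue  : element strain-energy term u_e^T K_0 u_e
    The first term is the (negative) adjoint compliance sensitivity;
    the second is the derivative of the quadratic volume penalty.
    """
    d_compliance = -(1.0 / J0) * p * rho_e ** (p - 1) * ue_K0_ue
    d_volume = (2.0 * lam / (V0 * Vfc)) * (V / (V0 * Vfc) - 1.0)
    return d_compliance + d_volume

# When the volume constraint is exactly satisfied, only the compliance
# term contributes, and it is always negative: adding material to any
# loaded element reduces compliance.
```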
The proposed method is compatible with any gradient-based optimization algorithm commonly used in deep learning. In this paper, the Adam optimizer is adopted because it generally provides more stable and efficient updates for neural-network-based optimization problems. The convergence criterion is based on the change in the loss function, specifically the standard deviation of the objective function values between successive iterations. Its formulation is as follows:
$$\sigma_i = \sqrt{\frac{1}{N_C - 1} \sum_{j=i-(N_C-1)}^{i} \left(L_j - \bar{L}_i\right)^2}, \qquad \bar{L}_i = \mathrm{Mean}\left(\{L_{i-(N_C-1)},\ L_{i-(N_C-2)},\ \ldots,\ L_i\}\right)$$
where $N_C$ denotes the number of consecutive iterations used to monitor convergence ($i \ge N_C$). If $\sigma_i < \tilde{\sigma}$, the optimization algorithm is terminated.
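The windowed convergence test can be written compactly (names hypothetical; `statistics.stdev` computes the sample standard deviation with the same $1/(N_C - 1)$ factor as the expression above):

```python
import statistics

def converged(losses, NC=10, tol=1e-4):
    """Stop when the sample std of the last NC loss values drops below tol."""
    if len(losses) < NC:          # not enough history yet
        return False
    return statistics.stdev(losses[-NC:]) < tol

# Once the loss plateaus, the windowed deviation falls below the tolerance.
history = [1.0, 0.5, 0.3] + [0.2] * 10
```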
Figure 4 illustrates the optimization flowchart. The procedure begins with the initialization of parameters, including the location parameters ($\mathbf{c}_i$), shape parameters ($\Sigma_i$), and the network parameters ($\theta$). GeNN is then employed to compute the density field $\rho$, followed by solving the finite element analysis (FEA) to evaluate the structural response. The loss function is computed based on the optimization objectives, and the parameters are updated using the Adam optimization algorithm. The iterative process continues until pre-defined convergence criteria are satisfied, at which point the optimized density field is output. Upon completing the iterative training, GeNN enables real-time generation of high-resolution optimized structures through forward propagation using refined meshes.

4. Numerical Experiments

To investigate the proposed GeNN method, a series of numerical examples are presented. Section 4.1 presents 2D compliance examples to evaluate the optimization capabilities; Section 4.2 explores high-resolution topology optimization design; Section 4.3 provides a comparative analysis between the proposed GeNN and the traditional method to demonstrate the superiority of the proposed method; and Section 4.4 compares GeNN with various RBF- and neural-network-based models. The default parameters in the following implementation are as follows:
1: The design domains of all 2D examples are assumed to be discretized using a mesh of $80 \times 40$ square FEs (Q4 elements); the number of ARBFs is set to $20 \times 10$; and two hidden layers with 10 neurons per layer are employed, unless otherwise stated.
2: The volume fraction, the density penalization power, Young's modulus, and Poisson's ratio are set as 0.4, 3, 1.0, and 0.3, respectively.
3: The network parameters are updated using the Adam optimizer with a learning rate set to 0.1. The momentum parameters are configured as $\beta_1 = 0.9$ and $\beta_2 = 0.999$, respectively.
4: Convergence is evaluated over 10 consecutive iterations, i.e., NC = 10. The convergence tolerance values are set to 0.0001 for the 2D examples and 0.001 for the 3D examples, respectively.
5: All examples are implemented using Python 3.10.16 and PyTorch 2.5.1.
6: For the SIMP baseline, a standard density filter is employed to suppress checkerboard patterns and mesh dependency. The filter radius is set to 1.3 elements for the 2D examples and 1.5 elements for the 3D example.

4.1. 2D Compliance Minimization Examples

Figure 5 illustrates the design domains and boundary conditions for both the cantilever beam and Michell beam. For the cantilever beam, the left edge is fully constrained, while a unit concentrated load is applied at the bottom-right corner. In the Michell beam problem, both the bottom-left and bottom-right corners are fully fixed, with a concentrated force applied at the midpoint of the bottom edge.
Figure 6 shows the optimized results of the cantilever and Michell beams throughout the iterative process, where nearly stable results are achieved after 60 iterations. Meanwhile, the results indicate that the proposed method can achieve nearly binary (0–1) topology optimization and avoids the checkerboard phenomenon without relying on filtering techniques commonly employed in traditional SIMP topology optimization. To further quantify checkerboard suppression, a $2 \times 2$ block-based checkerboard index $I_{cb}$ is introduced to measure element-scale alternating density patterns. Figure 7 compares the proposed GeNN with the unfiltered SIMP baseline for the cantilever beam and Michell beam. For the cantilever beam, $I_{cb}$ decreases from 0.1251 to 0.0434, corresponding to a reduction of approximately 65.3%. For the Michell beam, $I_{cb}$ decreases from 0.0724 to 0.0524, corresponding to a reduction of approximately 27.6%. These quantitative results further support that the ARBF-based continuous parameterization in the proposed framework effectively suppresses checkerboard artifacts without additional filtering.
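The text does not reproduce the exact definition of $I_{cb}$; one plausible $2 \times 2$ block-based measure, shown here purely as an illustrative assumption (the paper's actual index may differ), contrasts diagonal against anti-diagonal densities within each block:

```python
import numpy as np

def checkerboard_index(rho):
    """Illustrative 2x2 block measure of alternating density patterns.

    For each disjoint 2x2 block with entries [[a, b], [c, d]], the contrast
    |(a + d) - (b + c)| / 2 is 1 for a perfect checkerboard and 0 for a
    locally uniform field; the index averages this over all blocks.
    (Hypothetical definition, for illustration only.)
    """
    a = rho[:-1:2, :-1:2]; b = rho[:-1:2, 1::2]
    c = rho[1::2, :-1:2]; d = rho[1::2, 1::2]
    return float(np.mean(np.abs((a + d) - (b + c))) / 2.0)

# A perfect checkerboard scores 1; a uniform gray field scores 0.
checker = np.indices((8, 8)).sum(axis=0) % 2
uniform = np.full((8, 8), 0.5)
```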
The convergence curves of the Michell beam, including the loss function and volume fraction, are illustrated in Figure 8. The results indicate that the optimization process stabilizes after approximately 60 iterations, with only minor adjustments observed at the structural boundaries.
To further illustrate the ability of the proposed method to handle complex geometries, the well-known L-bracket benchmark problem is employed, where the top end of the L-bracket is fully constrained and the external force is distributed over 8 nodes to avoid singularities and ensure numerical stability, as shown in Figure 9a. The design domain is discretized using a mesh of $140 \times 140$ square FEs (Q4 elements), with the number of ARBFs set to $20 \times 20$. The volume fraction is set to 0.23. Figure 9b presents the optimized result, demonstrating that the proposed method can achieve clear boundaries while handling complex geometric problems. The distribution of ARBFs in Figure 9c further demonstrates the method's ability to dynamically adjust shape and position parameters, enabling precise modeling of complex density variations while ensuring the clarity of structural boundaries.

4.2. Real-Time High-Resolution Inference-Based Design

After completing the training process, the GeNN utilizes its pre-trained neural network to generate high-resolution optimization results in real time by forward propagation. Figure 10 illustrates finer resolution boundaries of the optimized topologies, which were obtained by sampling the density function at a resolution 4× higher using the pre-trained GeNN. This process enables near real-time high-resolution inference after training, with the high-resolution forward pass completed in less than 0.02 s, while the total runtime, including training, is approximately 30 s.

4.3. Comparison with the Traditional Method

In this section, a simply supported (SS) beam is used to compare the performance of the proposed GeNN framework with the traditional SIMP method. The GeNN is then extended to the 3D compliance minimization problem to evaluate its effectiveness in handling more complex optimization scenarios with improved precision and efficiency.
For the SS beam, the bottom-left corner is completely fixed, while vertical displacements are restrained along the lower 3/5 of its length, and a uniform unit load is applied along the top edge. The design domain is assumed to be discretized using a mesh of $160 \times 80$ square FEs (Q4 elements). Figure 11 presents the comparison results between the SIMP and GeNN methods. The results demonstrate that the proposed GeNN framework produces clearer optimization results with more fine-scale structural features. Meanwhile, Table 1 summarizes the SS beam results obtained with SIMP and GeNN. These results show that the topology obtained using the proposed method achieves better accuracy with a significantly reduced number of iterations compared to the SIMP method. Additionally, the computation time is reduced from 61.36 s to 32.55 s.
In the 3D example, this extension is accomplished by changing the input coordinates from ( x ,   y ) to   ( x ,   y ,   z ) , coupled with the implementation of a 3D FEA solver. The finite element analysis is performed using a standard linear 8-node hexahedral displacement-based element with 24 degrees of freedom per element. The element stiffness matrix is derived under the linear elasticity assumption and assembled into a global sparse stiffness matrix. Density-dependent stiffness interpolation is implemented using the SIMP scheme. The resulting linear system is solved using a sparse direct solver (PARDISO). This implementation ensures numerical stability and scalability for large-scale 3D problems.
A 3D cantilever beam, discretized by an 80 × 40 × 4 hexahedral mesh, is employed to demonstrate the generalization of the proposed GeNN method, as shown in Figure 12. Figure 13 illustrates that the optimized result obtained by the proposed method has clearer and smoother boundaries than the SIMP method. Furthermore, the proposed GeNN method exhibits superior computational efficiency, converging in fewer iterations while simultaneously achieving a lower compliance value than the traditional SIMP-3D approach, as summarized in Table 2.

4.4. Comparison with Network-Based Methods

In contrast to prior RBF- or neural-network-based methods that rely on isotropic basis functions or static feature encodings, the proposed GeNN approach introduces anisotropic directional scaling, enabling more accurate representation of fine-scale structural features with high aspect ratios. To this end, the cantilever and Michell beam problems shown in Figure 5 are used to demonstrate the advantages of the proposed method. The compared models include TDF-NN [7], RBFNN [20], and TOuNN [33]. To ensure that the comparison reflects the intrinsic parameterization capability of each network-based method, the Heaviside projection was not applied to the RBFNN baseline in this study. Figure 14 shows that the proposed method yields clearer boundaries and topological features than the other network-based methods. To provide a scale-consistent comparison, we additionally normalize the total wall-clock time by the number of degrees of freedom (DOFs). Since all compared methods are evaluated on the same 2D benchmark discretization (80 × 40 Q4 elements, corresponding to 6642 DOFs), the time per DOF provides a fairer measure of computational efficiency. The normalized results further confirm the efficiency advantage of the proposed GeNN. For the cantilever beam case, the time per DOF is 9.92 × 10⁻⁴ s/DOF for GeNN, compared with 1.89 × 10⁻³, 2.67 × 10⁻³, and 5.88 × 10⁻³ s/DOF for TDF-NN, RBFNN, and TOuNN, respectively. For the Michell beam case, the corresponding values are 8.22 × 10⁻⁴, 1.90 × 10⁻³, 2.59 × 10⁻³, and 6.52 × 10⁻³ s/DOF, respectively.
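The normalization itself is straightforward arithmetic; the snippet below reproduces the reported cantilever-beam figures from the wall-clock times:

```python
# DOFs of an nx x ny Q4 mesh: (nx + 1) * (ny + 1) nodes, 2 dofs per node.
nx, ny = 80, 40
dofs = (nx + 1) * (ny + 1) * 2        # 81 * 41 * 2 = 6642

# Reported total wall-clock times (s) for the cantilever beam case.
times = {"GeNN": 6.59, "TDF-NN": 12.54, "RBFNN": 17.75, "TOuNN": 39.07}
per_dof = {name: t / dofs for name, t in times.items()}
```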
Figure 15a illustrates the superior computational efficiency of the proposed GeNN method across both benchmark cases. For the cantilever beam problem, GeNN achieves convergence in 6.59 s, a 47.5% reduction in computation time compared to TDF-NN (12.54 s), a 62.9% reduction compared to RBFNN (17.75 s), and an 83.1% reduction compared to TOuNN (39.07 s). Similarly, for the Michell beam problem, GeNN completes optimization in 5.46 s, outperforming TDF-NN (12.61 s), RBFNN (17.20 s), and TOuNN (43.34 s) with reductions of 56.7%, 68.3%, and 87.4%, respectively. In addition, Figure 15b further highlights that GeNN not only converges faster but also attains competitive or superior objective values, underscoring its effectiveness in balancing computational efficiency and optimization quality.

5. Scalability of the Proposed Method

The scalability of the proposed GeNN method is validated across a wide range of optimization scenarios, including heat conduction problems, multi-volume constraint designs, and large-scale structural optimization tasks.

5.1. Generalization to Heat Conduction

This section evaluates the generalizability of the proposed method using a benchmark heat conduction problem. The problem involves a square plate subjected to distributed heating across its entire surface, with a heat sink located at the center of the lower edge, as illustrated in Figure 16a. The square plate is discretized using a 100 × 100 square finite element mesh. The heat conduction problem is formulated as follows:
$$\min_{\rho_e^h}\; P^{T} T \;=\; \sum_{e=1}^{N} t_e^{T}\, k_e^{h}\!\left(E_e^{h}(\rho_e^{h})\right) t_e$$
$$\text{subject to}\quad K^{h} T = P,\qquad \frac{V(\rho_e^{h})}{V_0} = V_f^{h},\qquad 0 < \rho_e^{h} \le 1$$
Here, T, K^h, and P denote the global temperature vector, the global heat conductivity matrix, and the heat load vector, respectively, while t_e, k_e^h, and ρ_e^h denote the element temperature vector, the element heat conductivity matrix, and the design variable (element density).
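As a minimal numerical illustration of this formulation, the objective P^T T can be evaluated by solving the state equation K^h T = P; the 2 × 2 conductivity matrix below is an arbitrary toy example, not data from the benchmark:

```python
import numpy as np

K = np.array([[2.0, -1.0],
              [-1.0, 2.0]])     # toy global heat conductivity matrix K^h
P = np.array([1.0, 0.0])        # heat load vector
T = np.linalg.solve(K, P)       # state equation: K^h T = P
compliance = P @ T              # thermal compliance objective P^T T
```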
Figure 16b,c show the optimization results from the proposed GeNN method and the SIMP method, respectively. The comparison highlights that the GeNN framework not only achieves clearer structural boundaries and a 19.7% reduction in the objective function value compared to SIMP but also captures more detailed topological features.

5.2. Application Scalability to Multi-Volume Constraint Design

Since the proposed GeNN method is built on a neural network framework, it extends naturally to multi-volume constraint design problems. The key idea is to modify the loss function to incorporate multiple volume constraints, allowing the network to learn and optimize for these constraints simultaneously. The modified loss function can be expressed as
$$\mathrm{Loss} = \frac{J}{J_0} + \sum_i \beta_i \left(\mathrm{Mask}_i[\rho] - \tilde{\rho}_i\right)^2$$
where β i is a weighting factor for each volume constraint, Mask [·] is a masking function that selectively applies the constraint to specific regions of the design space, and ρ ~ i is the target density for the i-th volume constraint.
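Under these definitions, a plausible PyTorch sketch of the modified loss reads as follows; the use of boolean region masks and of the regional mean density as the constraint measure are assumptions about the exact form of Mask[·]:

```python
import torch

def multi_volume_loss(J, J0, rho, masks, targets, betas):
    """Loss = J/J0 + sum_i beta_i * (mean density over region i - rho_tilde_i)^2.
    A sketch only: each mask is a boolean tensor selecting one subdomain,
    and the regional mean is used as the volume-fraction measure."""
    loss = J / J0
    for mask, rho_t, beta in zip(masks, targets, betas):
        vol_frac = rho[mask].mean()             # volume fraction of region i
        loss = loss + beta * (vol_frac - rho_t) ** 2
    return loss

rho = torch.full((40, 160), 0.21)               # uniform trial density field
masks = [torch.zeros(40, 160, dtype=torch.bool) for _ in range(3)]
masks[0][:, :53], masks[1][:, 53:107], masks[2][:, 107:] = True, True, True
loss = multi_volume_loss(torch.tensor(1.0), 1.0, rho,
                         masks, targets=[0.20, 0.23, 0.20], betas=[1.0] * 3)
```

In the bridge example that follows, three such masks partition the design domain, with the weighting factors β_i gradually increased during training.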
Figure 17a,b show the design domain and the multi-volume constraints of a 2D bridge structure. The design domain is a 2D rectangular area with a length of 160 m and a height of 40 m. Two scenarios are tested: the first involves a single volume constraint with a volume fraction of 0.23, while the second involves three volume constraints of 0.20, 0.23, and 0.20 applied to the three subdomains. The weighting factors for the volume constraints are set equal, initialized at 1 and increased in increments of 0.5 until reaching 100. The design domain is discretized into a mesh of 160 × 40 elements, with 80 × 20 anisotropic radial basis functions (ARBFs). Figure 17c,d present the optimization results for the 2D bridge structure under the single- and multi-volume constraint scenarios. After optimization, the volume fractions of the subdomains are 0.1998, 0.2301, and 0.1998, closely matching the target values, which demonstrates the effectiveness of the GeNN method in handling multi-volume constraints accurately. In addition, the results show that the GeNN method produces well-defined, clear structural boundaries even under multiple volume constraints.

5.3. Computational Scalability: Million-DOF Bridge Optimization

To further demonstrate the computational scalability and practical potential of the proposed method in resource-constrained engineering applications, this section applies GeNN to the three-dimensional structural optimization of a long-span bridge. Traditional long-span bridge design relies heavily on experience and intuition, which limits innovation in bridge systems and forms; topology optimization offers a form-finding approach that automatically identifies the optimal structural form under given load and boundary conditions. Here, a long-span bridge with a total length of 384 m (Figure 18) is studied under simple support and uniform load, with the design domain discretized using a mesh of 384 × 48 × 24 hexahedral finite elements. The material properties are Young's modulus 1.0, Poisson's ratio 0.3, and a volume fraction of 0.2, and the uniform load applied to the bridge deck is set to 1.0. In addition to the prescribed supports and loads, two engineering-related geometric constraints are imposed: a deck-width-related non-design region and a clearance region beneath the deck, implemented as fixed solid and fixed void subdomains, respectively (highlighted in yellow in Figure 18).
Given the structural symmetry, the half structure with 192 × 48 × 24 hexahedral finite elements is used for the optimization. Consequently, the optimization model contains approximately 700k degrees of freedom, while the full bridge model corresponds to more than one million degrees of freedom. The peak GPU memory reserved during training was approximately 21.3 GB, as reported by PyTorch's CUDA memory statistics. In this example, the maximum number of optimization iterations is set to 100, which is sufficient for the optimization process to reach a stable structural configuration.
Figure 19 presents the optimized 3D bridge structure, which was obtained within ten minutes, demonstrating that the proposed GeNN method is well-suited for stiffness-driven conceptual design in large-scale, resource-constrained civil engineering problems involving millions of degrees of freedom.

6. Discussion

In this study, we explore the integration of Anisotropic Radial Basis Functions (ARBFs) within a neural network framework and propose a novel Geometric-Enhanced Neural Network (GeNN) for topology optimization. Numerical experiments demonstrate that the GeNN method delivers superior optimization performance and exhibits strong scalability. However, further analysis is required to evaluate the impact of key factors, including the number of ARBFs, the neural network architecture, and the contribution of the ARBFs themselves (via ablation), to better understand their influence on optimization outcomes and to refine the proposed methodology.
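For concreteness, the ARBF-plus-MLP parameterization studied here can be sketched in PyTorch as follows. The Gaussian kernel form and the diagonal (axis-aligned) directional scaling are simplifying assumptions for illustration — the paper's ARBFs adjust a full covariance matrix — and all layer sizes are arbitrary:

```python
import torch

class ARBFLayer(torch.nn.Module):
    """Sketch of an anisotropic-Gaussian RBF feature layer with learnable
    centers and per-axis directional scales (diagonal covariance assumed)."""
    def __init__(self, n_kernels, dim=2):
        super().__init__()
        self.centers = torch.nn.Parameter(torch.rand(n_kernels, dim))
        self.log_scales = torch.nn.Parameter(torch.zeros(n_kernels, dim))

    def forward(self, x):                           # x: (batch, dim)
        d = x[:, None, :] - self.centers[None, :, :]
        s = torch.exp(self.log_scales)[None, :, :]  # directional scaling
        return torch.exp(-((d * s) ** 2).sum(-1))   # (batch, n_kernels)

# ARBF features feed an MLP that outputs the density rho(x) in (0, 1).
net = torch.nn.Sequential(ARBFLayer(200), torch.nn.Linear(200, 10),
                          torch.nn.ReLU(), torch.nn.Linear(10, 1),
                          torch.nn.Sigmoid())
rho = net(torch.rand(64, 2))                        # densities at 64 coordinates
```

Both the kernel parameters (centers, scales) and the MLP weights are registered as trainable parameters, so a single optimizer update adjusts the geometric features and the density mapping jointly, consistent with the backpropagation-based training described in the paper.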

6.1. Impact of the Number of ARBFs

ARBFs serve as critical geometric feature extractors within the GeNN framework. However, determining the minimum number of ARBFs remains a challenging task. Intuitively, a higher number of ARBFs may capture more topological details. To investigate this, a series of experiments was conducted to illustrate the effect of the number of ARBFs on the optimized structures. Figure 20 shows the optimized results by varying the number of ARBFs while keeping all other parameters at their default values.
It is worth noting that, although the number of ARBFs influences the geometric richness of the optimized topology, the corresponding objective values remain very close. In the present examples, the maximum relative difference among the tested configurations is approximately 1.3%, indicating that the proposed method can generate multiple competitive structural designs with comparable performance, offering diverse high-quality candidates for engineering selection. Based on extensive numerical experiments, a practical heuristic is to select the number of ARBFs to be approximately 50% of the number of finite elements along each principal design direction.
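This heuristic can be stated as a one-line rule; the helper below is illustrative (the function name and the rounding behavior are assumptions):

```python
def arbf_grid(n_elems_x, n_elems_y, ratio=0.5):
    """Heuristic reported in the study: choose the number of ARBFs to be
    roughly 50% of the number of finite elements along each principal
    design direction."""
    return max(1, round(n_elems_x * ratio)), max(1, round(n_elems_y * ratio))

# e.g., the 160 x 40 bridge mesh in Section 5.2 uses 80 x 20 ARBFs
grid = arbf_grid(160, 40)
```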

6.2. Effect of the Network Architecture and Hyperparameters

Table 3 presents the four network architectures investigated for the proposed GeNN while keeping the number of ARBFs constant. In Configuration A, the network comprises two hidden layers with five neurons per layer. Configuration B consists of two hidden layers with ten neurons per layer. Configuration C contains two hidden layers with twenty neurons per layer, and Configuration D features three hidden layers with ten neurons per layer. In this experiment, the previously described cantilever beam is used.
Notably, the proposed method converged with all network configurations. The corresponding optimization results are given in Figure 21, which shows that clearly optimized structures are obtained across all network architectures. Figure 22 further compares the results for the cantilever beam under varying network configurations, highlighting the robustness of the method. Among these, Configuration B (the default network configuration in this study) achieved the best optimized structure in the shortest computational time.
To further examine the influence of the penalty parameter λ, a sensitivity analysis was conducted by varying its initial value (λ0 = 1, 5, 10) across representative benchmark problems. The results (Figure 23 and Table 4) show that although early-stage oscillations depend on λ0, the final compliance values and topologies remain consistent, demonstrating robust convergence behavior.

6.3. Ablation Study of ARBFs

To investigate the functionality of ARBFs, this study performed an ablation study using the network architecture outlined in Table 5. In this experiment, the number of neurons in the first hidden layer of the architecture without ARBFs is set equal to the number of ARBFs used in the GeNN. As shown in Figure 24, the absence of ARBFs prevented the network from achieving correct topology optimization results. Adding batch normalization (BN) layers can improve the stability of network training and facilitate the convergence of topology optimization results; however, this approach may still lead to a loss of fine topological details. These findings clearly demonstrate that ARBFs significantly enhance the geometric representation of the neural network.

6.4. Sensitivity of Optimization to ARBF Initialization and Random Seed

To further assess the robustness of the proposed GeNN framework, additional sensitivity studies were conducted for the cantilever and Michell beam benchmarks under the default 80 × 40 mesh and 20 × 10 ARBF setting. The analysis considered three aspects: the initial ARBF covariance, the initialization strategy of the ARBF centers, and the optimization stability under different random seeds.
As summarized in Table 6, the sensitivity analyses covered covariance-related kernel-width initialization (σ_init = 0.8, 1.1, 1.5), ARBF center initialization (uniform grid versus perturbed grid within ±0.2h), and random-seed stability ({1, 999, 2025}). Only minor variations are observed in the final objective values and iteration counts, demonstrating that the proposed GeNN framework is insensitive to reasonable initialization changes and exhibits satisfactory robustness under stochastic optimization.

7. Conclusions

This article introduces a novel Geometric-enhanced Neural Network (GeNN) for topology optimization. By integrating anisotropic radial basis functions (ARBFs) into a neural network framework, GeNN provides a continuous and geometrically informed representation of material densities. Based on the numerical examples considered in this study, the proposed method demonstrates improved computational efficiency and competitive optimization performance compared with conventional topology optimization methods as well as representative RBF- and neural-network-based approaches. In addition, it effectively suppresses checkerboard patterns without employing filtering techniques, indicating its potential for large-scale and resource-constrained civil engineering applications.
The main conclusions are summarized as follows.
1. The proposed GeNN method utilizes ARBFs with directional scaling factors to capture the geometric features. The input of the GeNN method is the coordinate x and the output is the density field ρ . The learnable parameters include the position and shape parameters of ARBFs, as well as the weights and biases of the MLP, which are optimized through the backpropagation algorithm. Ablation studies further validate that ARBFs significantly enhance the geometric representation.
2. Compared to the traditional SIMP method and other RBF- or neural-network-based methods, the proposed GeNN method achieves superior optimization results in significantly less time. The optimized layouts have clearer boundaries, i.e., fewer gray elements. Additionally, GeNN employs adaptive ARBFs to effectively reduce the number of design variables, further enhancing computational efficiency.
3. The proposed GeNN method effectively eliminates numerical instabilities without relying on filtering techniques. Furthermore, by tuning the network parameters, multiple competitive and distinct optimization results can be generated, enabling designers to select an appropriate optimization structure for a specific design task.
4. The proposed GeNN framework demonstrates good adaptability across the computational examples considered in this study, including compliance minimization, heat conduction, multi-volume design, and a large-scale 3D bridge example. Moreover, the successful optimization of the 3D bridge model with more than one million degrees of freedom indicates the computational scalability of the framework for large-scale topology optimization problems.
However, several limitations of the present study should be acknowledged. The representation capacity of GeNN is inherently governed by the number and scale of ARBF kernels, leading to a trade-off between fine-feature expressiveness and implicit smoothness or regularization. An insufficient number of kernels may restrict geometric resolution, whereas overly localized kernels may increase sensitivity to initialization and hyperparameter selection. Moreover, although the ARBF parameterization mitigates checkerboard artifacts without explicit filtering techniques, the final discreteness of the optimized topology (i.e., the presence of gray elements) may still depend on kernel width initialization, penalization parameters, and training configurations.
Future work will focus on adaptive parameter tuning to improve robustness, extending GeNN to multi-volume and seismic-loading scenarios, and integrating it with multi-resolution strategies, more comprehensive engineering constraints, and complementary learning frameworks to enhance scalability and practical applicability in civil engineering.

Author Contributions

Conceptualization, L.Z., S.L., Z.L., X.Z., G.C., and W.Q.; methodology, L.Z., S.L., Z.L., X.Z., G.C., and W.Q.; software, L.Z., S.L., X.Z.; resources, L.Z.; writing—original draft, L.Z.; writing—review and editing, G.D. and X.Z.; investigation, Z.L., X.Z., G.C., and W.Q.; formal analysis, G.D.; validation, S.L.; supervision, G.D., X.Z., and G.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Conflicts of Interest

Authors Lei Zhang, Shiqiang Li, and Zhichu Lei were employed by Zhejiang Zhoushan Ocean Power Transmission Research Institute Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
x, y, z: Spatial coordinates / inputs to the neural representation.
ρ: Density field or material density variable.
θ: Trainable parameters of the GeNN model.
c_i: Center parameter of the i-th ARBF kernel.
Σ_i: Shape (covariance) parameter of the i-th ARBF kernel.
φ(·): Anisotropic radial basis function kernel.
U: Global displacement vector.
F: External load vector.
K: Global stiffness matrix.
V(ρ_e), V_0, V_f: Material volume, volume of the design domain, and the prescribed volume fraction.
E: Effective Young's modulus of the material.
p: Penalization factor in the material interpolation scheme.
λ: Penalty parameter used in the optimization framework.

References

  1. Bendsøe, M.P. Optimal shape design as a material distribution problem. Struct. Optim. 1989, 1, 193–202. [Google Scholar] [CrossRef]
  2. Bendsøe, M.P.; Sigmund, O. Material interpolation schemes in topology optimization. Arch. Appl. Mech. 1999, 69, 635–654. [Google Scholar] [CrossRef]
  3. Yulin, M.; Xiaoming, W. A level set method for structural topology optimization and its applications. Adv. Eng. Softw. 2004, 35, 415–441. [Google Scholar] [CrossRef]
  4. Xie, Y.M.; Steven, G.P. A simple evolutionary procedure for structural optimization. Comput. Struct. 1993, 49, 885–896. [Google Scholar] [CrossRef]
  5. Aage, N.; Andreassen, E.; Lazarov, B.S.; Sigmund, O. Giga-voxel computational morphogenesis for structural design. Nature 2017, 550, 84–86. [Google Scholar] [CrossRef]
  6. Baandrup, M.; Sigmund, O.; Polk, H.; Aage, N. Closing the gap towards super-long suspension bridges using computational morphogenesis. Nat. Commun. 2020, 11, 2735. [Google Scholar] [CrossRef]
  7. Qian, W.; Xu, Y.; Li, H. A topology description function-enhanced neural network for topology optimization. Comput. Aided Civ. Infrastruct. Eng. 2023, 38, 1020–1040. [Google Scholar] [CrossRef]
  8. Li, Y.; Lai, Y.; Lu, G.; Yan, F.; Wei, P.; Xie, Y.M. Innovative design of long-span steel–concrete composite bridge using multi-material topology optimization. Eng. Struct. 2022, 269, 114838. [Google Scholar] [CrossRef]
  9. Amir, O.; Shakour, E. Simultaneous shape and topology optimization of prestressed concrete beams. Struct. Multidiscip. Optim. 2017, 57, 1831–1843. [Google Scholar] [CrossRef]
  10. Zhao, Z.; Kang, Z. Automate topology optimization based on force density method for form-finding of branching structures. J. Build. Eng. 2025, 111, 113254. [Google Scholar] [CrossRef]
  11. Xie, L.; Yang, Z.; Xue, S.; Gong, L.; Tang, H. Topology optimization analysis of a frame-core tube structure using a cable-bracing-self-balanced inerter system. J. Build. Eng. 2024, 95, 110210. [Google Scholar] [CrossRef]
  12. Torstenfelt, B.; Klarbring, A. Conceptual optimal design of modular car product families using simultaneous size, shape and topology optimization. Finite Elem. Anal. Des. 2007, 43, 1050–1061. [Google Scholar] [CrossRef]
  13. Sigmund, O.; Petersson, J. Numerical instabilities in topology optimization: A survey on procedures dealing with checkerboards, mesh-dependencies and local minima. Struct. Optim. 1998, 16, 68–75. [Google Scholar] [CrossRef]
  14. Sigmund, O.; Maute, K. Sensitivity filtering from a continuum mechanics perspective. Struct. Multidisc. Optim. 2012, 46, 471–475. [Google Scholar] [CrossRef]
  15. Lazarov, B.S.; Sigmund, O. Filters in topology optimization based on Helmholtz-type differential equations. Int. J. Numer. Methods Eng. 2011, 86, 765–781. [Google Scholar] [CrossRef]
  16. van Dijk, N.P.; Maute, K.; Langelaar, M.; van Keulen, F. Level-set methods for structural topology optimization: A review. Struct. Multidisc. Optim. 2013, 48, 437–472. [Google Scholar] [CrossRef]
  17. Hamza, K.; Aly, M.; Hegazi, H. An explicit level-set approach for structural topology optimization. In Proceedings of the ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Portland, OR, USA, 4–7 August 2013. [Google Scholar]
  18. Yu, Y.; Hur, T.; Jung, J.; Jang, I.G. Deep learning for determining a near-optimal topological design without any iteration. Struct. Multidiscip. Optim. 2019, 59, 787–799. [Google Scholar] [CrossRef]
  19. Kallioras, N.A.; Kazakis, G.; Lagaros, N.D. Accelerated topology optimization by means of deep learning. Struct. Multidiscip. Optim. 2020, 62, 1185–1212. [Google Scholar] [CrossRef]
  20. Li, K.; Ye, W. Improving efficiency in structural optimization using RBFNN and MMA-Adam hybrid method. Adv. Eng. Inform. 2024, 62, 102869. [Google Scholar] [CrossRef]
  21. Zhang, Z.; Li, Y.; Zhou, W.; Chen, X.; Yao, W.; Zhao, Y. TONR: An exploration for a novel way combining neural network with topology optimization. Comput. Methods Appl. Mech. Eng. 2021, 386, 114083. [Google Scholar] [CrossRef]
  22. Zhang, W.; Zhao, G.; Su, L. Research on multi-stage topology optimization method based on latent diffusion model. Adv. Eng. Inform. 2025, 63, 102966. [Google Scholar] [CrossRef]
  23. Qian, W.; Li, H. Real-time generalizable structural topology generation via fourier and latent modulated neural networks informed by few prior designs. Eng. Struct. 2026, 348, 121700. [Google Scholar] [CrossRef]
  24. Tian, B.; Qian, W.; Yang, Y. Designer preference-driven topology optimization using a human-in-the-loop neural network. Comput. Struct. 2026, 321, 108070. [Google Scholar] [CrossRef]
  25. Li, Z.; Wang, L.; Li, F. Model-and-data-driven robust concurrent optimization for structural topology and device layout integrating load location and non-design domain effects. Aerosp. Sci. Technol. 2025, 170, 111566. [Google Scholar] [CrossRef]
  26. Zhao, X.; Wang, L. Double-scale time-dependent reliable topology optimization based on the first-passage failure and interval process theories. Comput. Methods Appl. Mech. Eng. 2025, 443, 118088. [Google Scholar] [CrossRef]
  27. Li, Z.; Wang, L.; Chai, Y.; Zhang, L.; Liu, Y. Topology design of multi-scale thermo-elastic structures considering performance reduction and disturbance based on interval models. Appl. Math. Model. 2025, 146, 116167. [Google Scholar] [CrossRef]
  28. Li, Z.; Wang, L.; Gu, K. Efficient reliability-based concurrent topology optimization method under PID-driven sequential decoupling framework. Thin-Walled Struct. 2024, 203, 112117. [Google Scholar] [CrossRef]
  29. Jeong, H.; Bai, J.; Batuwatta-Gamage, C.P.; Rathnayaka, C.; Zhou, Y.; Gu, Y.T. A physics-informed neural network-based topology optimization (PINNTO) framework for structural optimization. Eng. Struct. 2023, 278, 115484. [Google Scholar] [CrossRef]
  30. Jeong, H.; Batuwatta-Gamage, C.; Bai, J.; Xie, Y.M.; Rathnayaka, C.; Zhou, Y.; Gu, Y.T. A complete physics-informed neural network-based framework for structural topology optimization. Comput. Methods Appl. Mech. Eng. 2023, 417, 116401. [Google Scholar] [CrossRef]
  31. Shin, S.; Shin, D.; Kang, N. Topology optimization via machine learning and deep learning: A review. arXiv 2022, arXiv:2210.10782. [Google Scholar] [CrossRef]
  32. Yan, J.; Zhang, Q.; Xu, Q.; Fan, Z.; Li, H.; Sun, W.; Wang, G. Deep learning driven real time topology optimisation based on initial stress learning. Adv. Eng. Inform. 2022, 51, 101472. [Google Scholar] [CrossRef]
  33. Chandrasekhar, A.; Suresh, K. TOuNN: Topology Optimization using Neural Networks. Struct. Multidiscip. Optim. 2021, 63, 1135–1149. [Google Scholar] [CrossRef]
  34. He, W.; Chen, G.; Qian, W.; Chen, W.L.; Tang, L.; Kong, X. Model-Agnostic Meta-Learning in Predicting Tunneling-Induced Surface Ground Deformation. Symmetry 2025, 17, 1220. [Google Scholar] [CrossRef]
  35. Zhu, G.; Shen, S.L.; Yao, J.; Wang, M.; Zhuang, J.; Fan, Z. Automatic lightweight networks for real-time road crack detection with DPSO. Adv. Eng. Inform. 2025, 68, 103610. [Google Scholar] [CrossRef]
  36. Zhu, G.; Liu, J.; Fan, Z.; Yuan, D.; Ma, P.; Wang, M.; Sheng, W.; Wang, K.C. A lightweight encoder–decoder network for automatic pavement crack detection. Comput. Aided Civ. Infrastruct. Eng. 2024, 39, 1743–1765. [Google Scholar] [CrossRef]
  37. Wei, J.; Zhu, G.; Fan, Z.; Liu, J.; Rong, Y.; Mo, J.; Li, W.; Chen, X. Genetic U-Net: Automatically designed deep networks for retinal vessel segmentation using a genetic algorithm. IEEE Trans. Med. Imaging 2021, 41, 292–307. [Google Scholar] [CrossRef] [PubMed]
  38. Cheng, G.; Shcherbakov, V. Anisotropic radial basis function methods for continental size ice sheet simulations. J. Comput. Phys. 2018, 372, 161–177. [Google Scholar] [CrossRef]
  39. Dinh, H.Q.; Turk, G.; Slabaugh, G. Reconstructing surfaces using anisotropic basis functions. In Proceedings of the Eighth IEEE International Conference on Computer Vision. ICCV 2001, Vancouver, BC, Canada, 7–14 July 2001; pp. 606–613. Available online: https://ieeexplore.ieee.org/abstract/document/937682/ (accessed on 18 April 2025).
  40. Buhmann, M.D. Radial basis functions. Acta Numer. 2000, 9, 1–38. [Google Scholar] [CrossRef]
  41. Casciola, G.; Montefusco, L.B.; Morigi, S. Edge-driven Image Interpolation using Adaptive Anisotropic Radial Basis Functions. J. Math. Imaging Vis. 2010, 36, 125–139. [Google Scholar] [CrossRef]
  42. Svanberg, K. The method of moving asymptotes—a new method for structural optimization. Int. J. Numer. Methods Eng. 1987, 24, 359–373. [Google Scholar] [CrossRef]
  43. Svanberg, K. A Class of Globally Convergent Optimization Methods Based on Conservative Convex Separable Approximations. SIAM J. Optim. 2002, 12, 555–573. [Google Scholar] [CrossRef]
  44. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2017, arXiv:1412.6980. [Google Scholar] [CrossRef]
  45. Sigmund, O. A 99 line topology optimization code written in Matlab. Struct. Multidiscip. Optim. 2001, 21, 120–127. [Google Scholar] [CrossRef]
Figure 1. Versatility of Anisotropic Radial Basis Functions (ARBFs) in representing diverse geometric shapes by adjusting the covariance matrix.
Figure 2. Design domain and corresponding topology result of the Michell beam. Here, L denotes the span length, L/2 denotes half of the span height, and F represents the applied concentrated load.
Figure 3. Overall schematic of the proposed GeNN method. MLP denotes a Multi-Layer Perceptron (neural network), and FEA denotes Finite Element Analysis.
Figure 4. Optimization flowchart of the proposed GeNN method.
Figure 5. Design domain of 2D compliance minimization examples.
Figure 6. Optimized configurations by using the proposed GeNN for Cantilever and Michell beams.
Figure 7. Quantitative comparison of checkerboard artifacts between unfiltered SIMP and GeNN for the cantilever beam and Michell beam.
Figure 8. Convergence curve of the proposed GeNN for topology optimization of the Michell beam in Figure 5b.
Figure 9. Illustration of the design domain (a), the optimized topology (b), and the spatial distribution and anisotropic shapes of the ARBFs (c) for the L-bracket problem.
Figure 10. Higher resolution (4×) results of the proposed GeNN method.
Figure 11. Design domain and optimization results of the simply supported beam.
Figure 12. Design domain and boundary of the 3D cantilever problem.
Figure 13. Optimized 3D topology results obtained using the SIMP and proposed GeNN methods.
Figure 14. Comparative topology optimization results of Cantilever and Michell beams.
Figure 15. Computational time and objective value comparison of different optimization methods.
Figure 16. Design domain of the heat conduction problem and optimization results obtained by the GeNN and SIMP methods.
Figure 17. Design domain and multi-volume constraint of the 2D bridge. The arrows represent uniformly distributed unit loads.
Figure 18. Design domains and boundary conditions of a long-span bridge. The arrows indicate uniformly distributed unit loads applied on the structure.
Figure 19. Optimization result of a long-span bridge by using the proposed GeNN method.
Figure 20. Optimized topologies of the cantilever beam obtained by varying the number of adaptive ARBFs. (a) 20 × 10 ARBFs, (b) 30 × 15 ARBFs, (c) 40 × 20 ARBFs, and (d) 60 × 30 ARBFs.
Figure 21. Optimized results of the cantilever beam obtained by varying the network architecture. (a) Configuration A, (b) Configuration B, (c) Configuration C, and (d) Configuration D.
Figure 22. Summary of results for the cantilever beam obtained by varying the network architecture.
Figure 23. Influence of the initial penalty parameter (λ0) on volume-fraction convergence behavior for representative benchmark problems.
Figure 24. Optimized topologies from the ablation study. (a) With ARBFs, (b) without ARBFs and without the tanh activation function, (c) without ARBFs but with the tanh activation function.
Table 1. Summary of the simply supported (SS) beam using the SIMP and GeNN methods. Bold values indicate better performance.

| Case | Method | Obj. Func. Value | Iterations | No. of Design Variables | Time (s) |
|------|--------|------------------|------------|-------------------------|----------|
| SS Beam | SIMP | 115,819.95 | 400 | 12,800 | 61.36 |
| SS Beam | GeNN | **112,737.99** | **236** | **1231** | **32.55** |
Table 2. Summary of the 3D experiments using the SIMP and GeNN methods. Bold values indicate better performance.

| Case | Method | Obj. Func. Value | Iterations | No. of Design Variables | Time (s) |
|------|--------|------------------|------------|-------------------------|----------|
| SS Beam | SIMP | 141.588 | 500 | 12,800 | 727.91 |
| SS Beam | GeNN | **136.786** | **138** | **6431** | **91.42** |
Table 3. Investigated architectures of GeNN used for topology optimization. All configurations use 20 × 10 adaptive ARBFs as the input feature layer (F features).

| Configuration A | Configuration B | Configuration C | Configuration D |
|-----------------|-----------------|-----------------|-----------------|
| Linear: F→5 | Linear: F→10 | Linear: F→20 | Linear: F→10 |
| Linear: 5→5, tanh | Linear: 10→10, tanh | Linear: 20→20, tanh | Linear: 10→10, tanh |
| Linear: 5→1, Sigmoid | Linear: 10→1, Sigmoid | Linear: 20→1, Sigmoid | Linear: 10→10, tanh |
| — | — | — | Linear: 10→1, Sigmoid |
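To make the layer listing above concrete, the following minimal sketch (assuming NumPy; the weight names, shapes, and the anisotropic-Gaussian feature form are illustrative assumptions, not the authors' released implementation) shows how a Configuration-B-style network could map a coordinate through anisotropic RBF features to a density value in (0, 1):

```python
import numpy as np

def arbf_features(x, centers, inv_covs):
    """Anisotropic RBF features: phi_i(x) = exp(-(x - c_i)^T S_i^{-1} (x - c_i))."""
    d = x - centers                                # (F, 2) offsets to each center
    q = np.einsum('fi,fij,fj->f', d, inv_covs, d)  # per-basis quadratic form
    return np.exp(-q)                              # (F,) feature vector

def genn_density(x, centers, inv_covs, W1, b1, W2, b2, W3, b3):
    """Configuration-B-style forward pass: F -> 10 -> 10 (tanh) -> 1 (Sigmoid)."""
    h = arbf_features(x, centers, inv_covs)
    h = W1 @ h + b1
    h = np.tanh(W2 @ h + b2)
    return 1.0 / (1.0 + np.exp(-(W3 @ h + b3)))    # density in (0, 1)
```

Here F = 200 corresponds to the 20 × 10 ARBF grid; in the actual method the network weights and the ARBF centers and covariances would be optimized jointly.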
Table 4. Final objective values under different initial penalty parameters (λ0) across representative benchmark problems.

| Problem | λ0 = 1 | λ0 = 5 | λ0 = 10 |
|---------|--------|--------|---------|
| Michell Beam | 17.3245 | 17.0788 | 17.5695 |
| Cantilever-high-rise | 38.1183 | 38.4568 | 38.4498 |
| Cantilever | 90.2771 | 89.3035 | 92.5840 |
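The paper's exact constraint-handling scheme is described in the main text and is not reproduced here; as a generic illustration of how an initial penalty parameter λ0 can enter a volume-constrained loss, consider a simple quadratic-penalty sketch (the function names and the growth-factor update rule are hypothetical):

```python
def penalized_loss(compliance, vol_frac, vol_target, lam):
    """Compliance plus a quadratic penalty on the volume-fraction violation."""
    violation = vol_frac / vol_target - 1.0
    return compliance + lam * violation ** 2

def update_penalty(lam, growth=1.05):
    """Gradually increase the penalty so the volume constraint tightens over iterations."""
    return lam * growth
```

With a larger λ0 the volume constraint is enforced earlier, which can change both convergence behavior (Figure 23) and the final objective value (Table 4).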
Table 5. Investigated architectures for the ablation study.

| Configuration A | Configuration B | Configuration C |
|-----------------|-----------------|-----------------|
| 20 × 10 adaptive ARBFs | Linear: 2→200 | Linear: 2→200, tanh |
| Linear: F→10 | Linear: 200→10 | Linear: 200→10 |
| Linear: 10→10, tanh | Linear: 10→10, tanh | Linear: 10→10, tanh |
| Linear: 10→1, Sigmoid | Linear: 10→1, Sigmoid | Linear: 10→1, Sigmoid |
Table 6. Sensitivity of the proposed GeNN to ARBF initialization (covariance and center) and random seeds.

| Problem | Case | Setting | Obj. Func. Value | Iterations |
|---------|------|---------|------------------|------------|
| Cantilever beam | Sigma sensitivity | σ_init = 0.8, uniform grid, seed = 999 | 87.8817 | 117 |
| | | σ_init = 1.1, uniform grid, seed = 999 | 89.3035 | 126 |
| | | σ_init = 1.5, uniform grid, seed = 999 | 89.0494 | 125 |
| | Center initialization | uniform grid, σ_init = 1.1, seed = 999 | 89.3035 | 126 |
| | | perturbed grid (±0.2 h), σ_init = 1.1, seed = 999 | 89.2593 | 135 |
| | Seed stability | seeds = {1, 999, 2025} | 90.3783 ± 0.5490 | 102.00 ± 17.66 |
| Michell beam | Sigma sensitivity | σ_init = 0.8, uniform grid, seed = 999 | 17.1787 | 153 |
| | | σ_init = 1.1, uniform grid, seed = 999 | 17.0788 | 138 |
| | | σ_init = 1.5, uniform grid, seed = 999 | 17.5777 | 157 |
| | Center initialization | uniform grid, σ_init = 1.1, seed = 999 | 17.0788 | 138 |
| | | perturbed grid (±0.2 h), σ_init = 1.1, seed = 999 | 17.3404 | 115 |
| | Seed stability | seeds = {1, 999, 2025} | 17.1370 ± 0.1428 | 127.67 ± 7.32 |
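The ± entries in the seed-stability rows read as mean ± spread over the three listed seeds. A small helper that computes such a summary (assuming the sample standard deviation, which may differ from the authors' convention):

```python
import statistics

def seed_stability(values):
    """Mean and sample standard deviation over repeated runs (e.g., random seeds)."""
    return statistics.mean(values), statistics.stdev(values)
```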

Zhang, L.; Li, S.; Lei, Z.; Du, G.; Zhang, X.; Chen, G.; Qian, W. A Geometric-Enhanced Neural Network Method for Scalable and High-Resolution Topology Optimization. Symmetry 2026, 18, 537. https://doi.org/10.3390/sym18030537