Article

Index-Based Neural Network Framework for Truss Structural Analysis via a Mechanics-Informed Augmented Lagrangian Approach

School of Industrial Design & Architectural Engineering, Korea University of Technology & Education, 1600 Chungjeol-ro, Byeongcheon-myeon, Cheonan 31253, Republic of Korea
* Author to whom correspondence should be addressed.
Buildings 2025, 15(10), 1753; https://doi.org/10.3390/buildings15101753
Submission received: 15 April 2025 / Revised: 13 May 2025 / Accepted: 14 May 2025 / Published: 21 May 2025

Abstract

This study proposes an Index-Based Neural Network (IBNN) framework for the static analysis of truss structures, employing a Lagrangian dual optimization technique grounded in the force method. A truss is a discrete structural system composed of linear members connected to nodes. Despite their geometric simplicity, analysis of large-scale truss systems requires significant computational resources. The proposed model simplifies the input structure and enhances the scalability of the model using member and node indices as inputs instead of spatial coordinates. The IBNN framework approximates member forces and nodal displacements using separate neural networks and incorporates structural equations derived from the force method as mechanics-informed constraints within the loss function. Training was conducted using the Augmented Lagrangian Method (ALM), which improves the convergence stability and learning efficiency through a combination of penalty terms and Lagrange multipliers. The efficiency and accuracy of the framework were numerically validated using various examples, including spatial trusses, square grid-type space frames, lattice domes, and domes exhibiting radial flow characteristics. Multi-index mapping and domain decomposition techniques contribute to enhanced analysis performance, yielding superior prediction accuracy and numerical stability compared to conventional methods. Furthermore, by reflecting the structured and discrete nature of structural problems, the proposed framework demonstrates high potential for integration with next-generation neural network models such as Quantum Neural Networks (QNNs).

1. Introduction

Plane and space trusses are widely used in structural systems in architecture, civil engineering, mechanical engineering, and aerospace engineering. These systems exhibit a high strength-to-weight ratio, efficient material utilization, and excellent performance under complex loads by reliably distributing forces. Such structural advantages stem from their mechanical properties, whereby members are pin-connected at the joints and transmit only axial forces without significant moment transfer. Although trusses are discrete systems, they display mechanical behavior similar to that of continuous systems. Therefore, the accurate analysis and physical behavior prediction of trusses play a crucial role in the optimal design, damage assessment, and maintenance planning of structural systems [1].
Traditional approaches to truss analysis include the force method [2,3], finite element method (FEM), and direct minimization of the total potential energy [4,5]. However, these methods are computationally expensive, particularly for large-scale systems with complex boundary conditions [6] or when dealing with material and geometric nonlinearities [7,8]. Specific problems such as dynamic behavior [9,10], structural stability [11], optimization (e.g., topology, shape, and weight) [1,12,13], and shape control [14,15] have demanded the development of suitable methods, leading to a growing interest in advanced methodologies and surrogate modeling approaches. With the recent development of various neural network architectures, neural networks have attracted significant attention for structural engineering applications in trusses. They have been widely applied for structural analysis and optimization as well as for evaluating and verifying neural network architectures (see Section 2.1).
The emergence of neural networks has been recognized as a powerful tool for approximating complex functions and solving high-dimensional nonlinear problems. Over the past few years, various data-driven neural network architectures have been developed and applied not only in image [16] and audio processing [17,18,19] but also in numerous structural engineering fields such as structural behavior prediction [20,21]. However, data-based neural networks face significant challenges when they are applied to physical models. In particular, most supervised learning methods often lack robustness and generalization capabilities when trained using inherently limited datasets [22,23].
With the growing emphasis on physics-informed learning, research has transitioned from purely data-driven models to Physics-Informed Neural Networks (PINNs) [22], which demonstrate superior performance when guided by prior physical knowledge [23]. PINNs have emerged as powerful DNN frameworks for approximating solutions to governing equations in various fields including fluid and continuum mechanics, elasticity, and stochastic systems [24,25,26]. Methods such as weak-form loss functions [27], energy approaches [28,29], and domain decomposition [30] have been introduced to address challenges such as discontinuities and convergence. PINNs are tailored using physics-based loss functions, such as the total potential energy [28], and decomposable architectures, such as domain-wise PINNs [31] and separable PINNs [32,33].
Recently, structural engineering has emerged as one of the key domains where PINNs are actively employed to address complex mechanical problems. Building upon the success of PINNs in fluid mechanics, elasticity, and multi-physics systems, recent studies have applied PINNs to various structural engineering tasks, including Structural Health Monitoring (SHM), damage detection, inverse problem solving, and dynamic response prediction. Al-Adly and Kripakaran [34] proposed a PINN-based SHM approach to estimate full-field deformations and internal stresses of Kirchhoff–Love plates using sparse sensor data. Kalimullah et al. [35] employed a PINN integrated with Multi-Fidelity (MF) modeling and Transfer Learning (TL) to accurately localize Acoustic Emission (AE) signals in CFRP composite panels. Liu and Meidani [36] effectively identified parameters of nonlinear dynamic systems using a PINN that incorporates a Multi-Physics (MP) damping model. Additionally, Yang et al. [37] combined a Reduced-Order Model (ROM) with a PINN to develop the FRINO (FE Reduced-Order Model-Informed Neural Operator) framework for accurately predicting vibration responses of large-scale structures. These studies highlight the potential of PINNs as a promising tool that can complement or even replace traditional analysis techniques in solving structural engineering problems involving limited data and complex physical constraints [38,39].
For boundary-value problems, PINNs typically utilize randomly sampled boundary and collocation points as inputs to compute losses based on physical residuals. This strategy enables the network to generalize across continuous spatial domains [33,40,41]. A similar approach has been adopted for truss structures, where nodal coordinates or member geometries are often used as input features (see Section 2.1.3). However, unlike continua, truss structures are inherently discrete systems in which physical interactions are limited to directly connected members. Therefore, interpolating arbitrary spatial points, while essential for continuous media, is not required for discrete systems like trusses, where the mechanical behavior is fully determined by nodal connections and member forces; that is, PINNs are inherently well-suited for continuous systems but less so for discrete structures.
To address this, recent studies have explored alternative input domains based on design parameters or data-driven configurations [21,42,43]. While these methods alleviate some issues, they often involve high-dimensional inputs and indirect computation of member forces, limiting their efficiency and scalability. A more effective approach for truss structures is to redefine the input space using index-based representations that capture their discrete and topology-driven nature. Since only nodal outputs are physically meaningful, replacing spatial coordinates with indices yields equivalent analytical results (see Section 2.2). This avoids unnecessary spatial interpolation and enables efficient structural analysis. These considerations highlight the need for neural network models specifically designed to reflect the discrete and connectivity-driven behavior of truss systems.
Hence, in this study, we propose an index-based neural network framework that addresses the discrete nature of truss structures and the limitations of FEM-based analysis. Instead of coordinate-based inputs, spatial and element data are mapped to unique indices (e.g., member ID and node ID), aligned with the discrete characteristics of trusses, where interpolation lacks physical meaning (see Section 4.1). The key features of the framework are as follows: (1) the use of integer indices enables a compact and topology-invariant input representation; (2) mechanical constraints from the force and displacement methods are embedded in the loss function; (3) the augmented Lagrangian method enhances accuracy and stability; (4) axial forces and nodal displacements are predicted via independent networks of identical structure, enabling direct force estimation; and (5) multi-index mapping allows flexible domain decomposition and scalability. Based on this design, the main contributions of this study are as follows.
  • Proposal of an Index-Based Neural Network (Index-Based NN): a generalized model was proposed by defining coordinate-based input information required for truss analysis using integer indices to simplify the input structure.
  • Simplification and scalability of the input domain: by constructing a discrete input domain rather than using coordinate-based data, both the generality of problem definition and domain scalability were achieved.
  • Mechanics-informed loss function design: linear systems of equations based on the force and displacement methods were implemented within an augmented Lagrangian framework to incorporate physical constraints quantitatively.
  • Flexible domain separation based on multi-index mapping: various types of indexed data (e.g., member and node information, connectivity, loading conditions) were mapped and separated to improve training efficiency, model accuracy, and representational power.
  • Numerical validation and performance evaluation: the accuracy and scalability of the proposed framework were verified through 2D and 3D truss examples, and the effectiveness of the loss functions and penalty parameters was comparatively assessed.
The remainder of this paper is organized as follows. Section 2.1 presents a literature review of existing ANN-based approaches applied to truss structures and discusses their structural characteristics, as well as the motivation and potential of the proposed index-based neural network. Section 3 provides the theoretical foundation and formulation of the force method used in truss analysis. In Section 4, the overall framework of the proposed index-based neural network is introduced, along with the implementation of the mechanics-informed loss function based on the force method and the augmented Lagrangian framework. Section 5 evaluates the applicability and accuracy of the proposed model through various numerical examples and compares its performance with those of existing methods. Finally, Section 6 summarizes the key findings of this study and discusses future research directions.

2. Preliminary

2.1. Literature Review of ANN Model for Truss System

Research on Artificial Neural Networks (ANNs) for truss structures initially focused on shape and topology optimization. This research has since expanded to include broader areas such as structural analysis, reliability assessment, and damage detection. Early studies have employed simple network architectures with a single hidden layer. However, as structural problems become increasingly complex, data-driven (DD) Deep Neural Networks (DNNs) are being adopted. More recently, the introduction of Physics-Informed Neural Networks (PINNs) has established DNNs as powerful tools for physics-based structural analyses. ANNs are well-suited as surrogate models for structural problems because they can effectively approximate complex mathematical models, exhibit strong generalization capabilities for high-dimensional nonlinear problems, and provide rapid predictions once trained. Owing to these advantages, ANN applications in truss structures have been utilized for building expert systems based on large datasets, providing surrogate solutions to specific analysis problems, and replacing and complementing parts of traditional computational workflows. However, depending on the target problem and analysis objective, certain common patterns are observed in network architectures, input–output configurations, and learning schemes. Analyzing these characteristics is crucial for designing an ANN framework suitable for truss structures.

2.1.1. Optimization and Surrogate Model

Truss optimization has progressed significantly since a two-level model using a three-layered neural network was proposed by Hajela and Berke [44]. Hajela and Berke [45] introduced a neural network for predicting nodal displacements from member cross-sectional areas and proposed a functional-link network using scaling areas. Later, Berke and Hajela [46] distinguished two main approaches: NN-Assisted Optimization (NAO), in which the network aids in computation, and Direct Optimization for Expert systems (DOEs), which learns from large datasets.
The NAO has evolved through integration with methods such as Evolution Strategies (ESs) [47] and Differential Evolution (DE) [48,49,50]. GNNs have also been combined with PSO to reflect truss topology [1], whereas RBF-NNs have been applied to reliability analysis [51]. These models generally use cross-sectional areas $\mathbf{a} \in \mathbb{R}^{n_b}$ as inputs and predict displacements $\mathbf{d} \in \mathbb{R}^{n_f}$ or forces $\mathbf{t} \in \mathbb{R}^{n_b}$, which sometimes include external loads. In GNNs, nodal features include coordinates $\mathbf{X}$ and loads $\mathbf{P}$, $\mathbf{P}_{bc}$, with edge features given by $\mathbf{a}$. DOE, also rooted in Hajela and Berke [44], was advanced through hybrid and multinetwork models [52,53]. Recent studies have included DE-based data generation for training [54] and ANN-driven autodesign of steel frames without nonlinear analysis [42]. DOE-type models directly map structural parameters to outputs such as optimal areas, total weight, or performance functions.
Unlike NAO, DOE-based neural networks use input layers consisting of structural parameters, such as external loads $\mathbf{P}, \mathbf{P}_\theta \in \mathbb{R}^{2n_P}$, dimensions $L_1, L_2, h \in \mathbb{R}^3$, and cross-sectional properties $S_C, S_B \in \mathbb{R}^{18}$, typically represented as $P_1, \ldots, P_n \in \mathbb{R}^n$. The output is usually the optimal area vector $\mathbf{a}$. Although both the NAO and DOE rely on supervised learning with large datasets, a new paradigm, Direct Optimization for a Single target (DOS), has emerged through physics- or mechanics-informed DNNs. Rather than training expert systems, DOS directly derives optimal solutions via unsupervised learning. For example, Mai et al. [55] predicted the cross-sectional areas that minimized the weight or volume using a penalty-based objective function with equality and inequality constraints. Bayesian optimization was used for hyperparameter tuning in Mai et al. [56], and stress-constrained energy-based PINNs were proposed by Mai et al. [57]. DOS-based models take spatial coordinates $\mathbf{X}_i \in \mathbb{R}^{n_d}$ as the input and output $a_i \in \mathbb{R}$ or $(a_i, t_{r,i}) \in \mathbb{R}^2$, where $t_{r,i}$ is the internal force. The inputs may be defined by member centroids $\mathbf{X}_{m,i}$ or nodal pairs $(\mathbf{X}_{i,i}, \mathbf{X}_{j,i})$, leading to models of the form $F_\theta: \mathbb{R}^{n_d} \to \mathbb{R}$ or $F_\theta: \mathbb{R}^{2n_d} \to \mathbb{R}$.

2.1.2. Structural Analysis and Identification

In truss analysis and damage estimation, Neural Networks (NNs) serve as auxiliary solvers or surrogate models, following either NN-assisted or direct analysis approaches. Alam and Berke [58] introduced an NN-Assisted Analysis (NAA) model approximating the stress–strain relationship $\varepsilon \mapsto F_\theta(\varepsilon) = \sigma$ to capture material nonlinearity. Alternatively, Kaveh and Dehkordi [59] developed an expert system for rib domes using backpropagation and radial basis function (RBF) networks by applying the DAE concept to predict displacements or forces with $(\mathbb{R}^2, \mathbb{N}^+)$ inputs. Subsequent DAE-based studies extended this approach. Nguyen and Vu [60] proposed a binary classification model for failure detection using cross-sectional areas $\mathbf{a} \in \mathbb{R}^{n_{b,g}}$. Lee et al. [21] introduced a DD-ML-based DNN that maps loading data to displacements and plastic hinge states as $F_\theta: \mathbb{R}^{n_L + n_P} \to \mathbb{R}^{n_f + n_y}$. Lieu et al. [61] developed a DNN-based reliability model $F_\theta: \mathbb{R}^{n_b + 2} \to \mathbb{R}$ using structural properties to predict performance functions. Parallel efforts using PINN-based DAS models include Mai et al. [62], which minimizes the total potential energy $\Pi_p$ in an unsupervised framework $F_\theta: \mathbb{R}^{n_d} \to \mathbb{R}^{n_d}$, and Mai et al. [63], which predicts instability through tangential stiffness singular values, with outputs $(\mathbf{d}_i, \lambda) \in \mathbb{R}^{n_d + 1}$. For supervised DAS, Le-Duc et al. [43] proposed FEINN, a surrogate model that predicts displacements from design parameters using an FEA-informed loss, although their study focused on continua. For damage detection, Lee et al. [64] and Lieu [65] applied supervised DAE models using frequency- or acceleration-based inputs to estimate damage ratios $\mathbf{r}_d \in \mathbb{R}^{n_b}$. The former uses $(\omega, \phi) \in \mathbb{R}^{3 + 3n_f}$, whereas the latter focuses on acceleration. More recently, Mai et al. [66] proposed a PINN-based unsupervised DAS model with a loss function that combines $MDLAC(\hat{\mathbf{e}})$ and flexibility $F(\hat{\mathbf{e}})$, tuned via Bayesian optimization. The model maps nodal coordinates $\mathbf{X}_i \in \mathbb{R}^{n_d}$ to member-wise damage ratios $r_{d,i} \in \mathbb{R}$ using $F_\theta: \mathbb{R}^{n_d} \to \mathbb{R}$.

2.1.3. DNN Model’s IO Structures for Truss System

In previous studies on Artificial Neural Networks (ANNs) for truss structures, the most commonly used architecture involves using the cross-sectional area vector $\mathbf{a}$ as the input and predicting either the nodal displacement $\mathbf{d}$ or member stress $\mathbf{t}$ as the output. These approaches fall under the category of NN-Assisted Optimization (NAO). Representative network structures include $F_\theta: \mathbb{R}^{n_b} \to \mathbb{R}^{n_f}$ and $F_\theta: \mathbb{R}^{n_b} \to \mathbb{R}^{n_b}$ [45,46,47,48,49,50,51]. This implies that the neural network $F_\theta$ maps the input $\mathbf{a} \in \mathbb{R}^{n_b}$ to either the output $\mathbf{d} \in \mathbb{R}^{n_f}$ or $\mathbf{t} \in \mathbb{R}^{n_b}$, depending on the target. The number of input and output neurons may vary depending on how the members are grouped or whether the observed degrees of freedom are reduced. On the other hand, DOE (Direct Optimization for Expert systems) or DAE (Direct Analysis for Expert systems) models utilize a broader range of input parameters (e.g., $L$, $h$, $\mathbf{P}$, $S_C$, $\omega$, $\phi$), and the outputs vary depending on the objective, including $\mathbf{a}$, total weight $W$, displacement $\mathbf{d}$, safety factor $f_s$, damage ratio $\mathbf{r}_d$, and performance function $F_p$ [42,43,52,53,54,59,61,64,65].
Since the introduction of Physics-Informed Neural Networks (PINNs), architectures that use spatial coordinates $\mathbf{X} \in \mathbb{R}^{n_d}$ as inputs and predict target solutions at either nodes or members have become standardized. For instance, nodal displacement prediction typically follows $F_\theta: \mathbb{R}^{n_d} \to \mathbb{R}$, whereas member stress or damage ratio prediction follows $F_\theta: \mathbb{R}^{2n_d} \to \mathbb{R}$. In these cases, the input is defined as either the center coordinate of a member $\mathbf{X}_m$ or the nodal coordinate pair $(\mathbf{X}_i, \mathbf{X}_j)$ [55,56,57,62,63,66].

2.2. Validation of Index-Based Input via Coordinate Mapping

In this section, based on the 6-bar plane truss example presented in Mai et al. [62], we verify that a Neural Network (NN) using real-valued nodal coordinates $\mathbf{X}_j \in \mathbb{R}^{n_d}$ and one using an integer-based index $\mathbf{i}_j \in \mathbb{N}^{n_d}$ yield equivalent results. This serves as a crucial validation of the feasibility and effectiveness of index-based domain mapping. As shown in Figure 1a, the example aims to predict the nodal displacements $\mathbf{d}$ by minimizing the total potential energy $\Pi_p$ using an unsupervised learning-based NN defined as $F_\theta(\mathbf{X}_j): \mathbb{R}^{n_d} \to \mathbb{R}^{n_d}$. The material properties follow the same conditions as those in Mai et al. [62], and the external loading is $P = 150$ kN at node 4 in the x-direction, as shown in the figure. To generate index-based input data, the spatial domain was mapped as $(x, y) \mapsto (i, j)$ and converted into integer form, as illustrated in Figure 1b. For example, node 3, which is located at $(3.0, 4.0)$, is mapped to $(2, 2)$. The NN was trained using a single hidden layer containing 10 neurons for 1000 epochs. As shown in Table 1, the resulting total potential energy $\Pi_p$ agrees with the reference values reported by Mai et al. [62], thereby confirming the reliability of the approach.
These results suggest that a neural network can effectively learn structural information using only integer indices, without requiring real-valued spatial coordinates, indicating that coordinate-based input structures are unnecessary. Although this example employs index mapping with input vectors of the same dimension, the problem can also be simplified by transforming it into a one-dimensional input structure such as $F_\theta(i): \mathbb{N}^+ \to \mathbb{R}$, where $i$ refers to the member or node number. This type of DNN framework is described in detail in Section 4.
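For illustration, the mapping $(x, y) \mapsto (i, j)$ can be implemented by ranking the distinct coordinate levels along each axis, as in the following minimal Python sketch. The full nodal layout is an assumption reconstructed for illustration; only node 3 at $(3.0, 4.0) \mapsto (2, 2)$ is stated explicitly above.

```python
# Minimal sketch of the coordinate-to-index mapping (x, y) -> (i, j).
# The nodal layout below is an assumed reconstruction of Figure 1;
# only node 3 at (3.0, 4.0) -> (2, 2) is given explicitly in the text.
coords = {1: (0.0, 0.0), 2: (3.0, 0.0), 3: (3.0, 4.0), 4: (6.0, 0.0)}

xs = sorted({x for x, _ in coords.values()})  # distinct x levels -> index i
ys = sorted({y for _, y in coords.values()})  # distinct y levels -> index j

index_map = {n: (xs.index(x) + 1, ys.index(y) + 1)
             for n, (x, y) in coords.items()}
print(index_map[3])  # (2, 2), as in Figure 1b
```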

3. Force Method and Solutions in Truss Analysis

A truss structure is formed by connecting one-dimensional elastic members that carry only axial forces at their joints. Each member is governed by the following equilibrium equation under an axial load $P_x$ along its length $L$:

$$EA\,u_{,xx} + P_x = 0, \quad x \in [0, L] \tag{1}$$

where $EA$ denotes the axial rigidity and $u(x)$ is the axial displacement. This equation defines a Boundary Value Problem (BVP) with essential (Dirichlet) and natural (Neumann) boundary conditions. However, the analysis of an entire truss system involves recombining the boundary conditions of individual members such that the global equilibrium, compatibility, and constitutive relationships are satisfied. Among conventional methods, the finite element method (FEM) derives the element stiffness matrix $\mathbf{k}$ from the governing equation of each member and assembles them to construct the global stiffness equation:
$$\mathbf{K}\mathbf{d} = \mathbf{p} \tag{2}$$
Here, $\mathbf{K}$, $\mathbf{d}$, and $\mathbf{p}$ denote the global stiffness matrix, nodal displacement vector, and nodal external force vector, respectively. The matrix $\mathbf{K}$ is symmetric and positive definite, and it can be interpreted as the Hessian of the total potential energy. This formulation enables an analysis of system stiffness, the load–displacement response, and structural stability. However, member axial forces can only be computed by post-processing the obtained displacements, and inextensional deformations are difficult to identify directly. In contrast, the force method derives a direct relationship between external loads and member forces, allowing for a more intuitive modeling of internal force approximation. This approach treats the force and displacement systems independently, providing clarity in identifying key variables and facilitating the reconciliation of competing requirements.

3.1. Basic Equations and Mechanical Behaviour

3.1.1. Basic Equations of Force Method

In an $n_d$-dimensional spatial domain, consider a truss structure composed of $n_b$ members and $n_j$ joints, where $n_c$ degrees of freedom are constrained. The equilibrium relation between the external force vector $\mathbf{p} \in \mathbb{R}^{n_f}$ and the member force vector $\mathbf{t} \in \mathbb{R}^{n_b}$ is given by

$$\mathbf{A}\mathbf{t} = \mathbf{p} \tag{3}$$

Here, $\mathbf{A} \in \mathbb{R}^{n_f \times n_b}$ is the equilibrium matrix, and the number of free degrees of freedom is defined as $n_f = n_d \times n_j - n_c$.
Meanwhile, compatibility describes the relationship between the member elongation vector $\mathbf{e} \in \mathbb{R}^{n_b}$ and the nodal displacement vector $\mathbf{d} \in \mathbb{R}^{n_f}$, expressed as

$$\mathbf{B}\mathbf{d} = \mathbf{e} \tag{4}$$
Here, $\mathbf{B} \in \mathbb{R}^{n_b \times n_f}$ is a compatibility matrix that has a transposed relationship with the equilibrium matrix $\mathbf{A}$. The two matrices have the same rank, given by $r = \mathrm{rank}(\mathbf{A}) = \mathrm{rank}(\mathbf{B})$. This transpose relationship is derived from the energy principle, particularly the principle of virtual work, $\delta\mathbf{e}^T\mathbf{t} = \delta\mathbf{d}^T\mathbf{p}$. Substituting $\delta\mathbf{e}^T = \delta\mathbf{d}^T\mathbf{B}^T$, obtained from $\mathbf{B}\mathbf{d} = \mathbf{e}$, into the virtual work relation yields $\delta\mathbf{d}^T\mathbf{B}^T\mathbf{t} = \delta\mathbf{d}^T\mathbf{p}$. Therefore, from Equation (3), it follows that $\mathbf{B}^T = \mathbf{A}$, indicating that the two matrices are mutually dependent. Finally, the flexibility relationship connecting the equilibrium and compatibility conditions is as follows:
$$\mathbf{F}\mathbf{t} = \mathbf{e} \tag{5}$$
The flexibility matrix $\mathbf{F} \in \mathbb{R}^{n_b \times n_b}$ is a diagonal matrix whose diagonal entries represent the flexibility of each member, $F_{ii} = L_i/(EA_i)$.

3.1.2. Mechanical Behaviour via SVD($\mathbf{A}$)

The coefficient matrix inherently contains information about the structural system's behavior, including self-stress states ($s = n_b - r$) and mechanism modes ($m = n_f - r$). Here, $s = 0$ and $s > 0$ indicate statically determinate and indeterminate systems, respectively, while $m = 0$ and $m > 0$ represent kinematically determinate and indeterminate systems, respectively. The structural behavior can thus be classified into four types based on the combination of $s$ and $m$: (1) $s = 0$, $m = 0$; (2) $s > 0$, $m = 0$; (3) $s = 0$, $m > 0$; and (4) $s > 0$, $m > 0$. In general, trusses are designed to be kinematically determinate systems with $m = 0$, while structures such as cable nets and tensegrities often exhibit $m > 0$. Each of these modes can be extracted via the Singular Value Decomposition (SVD) of the coefficient matrix $\mathbf{A}$:
$$\mathbf{A} = \mathbf{U} \begin{bmatrix} \mathbf{V}_r & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \mathbf{W}^T \tag{6}$$
where $\mathbf{V}_r = \mathrm{diag}(\{v_i\}_{i=1}^{r})$ and $v_i$ are the singular values. The matrix $\mathbf{U} = [\mathbf{U}_r\ \mathbf{U}_m]$ is composed of the column space $\mathbf{U}_r$ and the left nullspace $\mathbf{U}_m$, while $\mathbf{W} = [\mathbf{W}_r\ \mathbf{W}_s]$ consists of the row space $\mathbf{W}_r$ and the nullspace $\mathbf{W}_s$. To satisfy initial equilibrium, the external load vector $\mathbf{p}$ must lie in the span of $\mathbf{U}_r$, and the member force vector $\mathbf{t}$ must lie in the span of $\mathbf{W}_r$. The matrix $\mathbf{U}_m$ represents the basis for zero-energy deformations (mechanism modes), and any component of $\mathbf{p}$ lying in this space indicates that the system cannot achieve initial equilibrium. Similarly, $\mathbf{W}_s$ corresponds to the self-stressable modes, forming the basis for the complementary homogeneous solution of Equation (3); these modes represent internal stress states that satisfy equilibrium without external loads [8,67].
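For illustration, this SVD-based classification can be reproduced with a few lines of NumPy, as in the following sketch (the function name and rank tolerance are implementation choices, not part of the formulation above):

```python
import numpy as np

def classify_truss(A, tol=1e-10):
    """Extract r, s, m and the corresponding subspaces from the SVD of A."""
    n_f, n_b = A.shape
    U, sv, WT = np.linalg.svd(A)
    r = int(np.sum(sv > tol))   # rank of the equilibrium matrix
    s = n_b - r                 # number of self-stress states
    m = n_f - r                 # number of mechanism modes
    U_m = U[:, r:]              # left nullspace: mechanism modes
    W_s = WT[r:, :].T           # nullspace of A: self-stress modes
    return r, s, m, U_m, W_s
```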

3.2. Solutions: Bar-Force $\mathbf{t}$ and Displacement $\mathbf{d}$

3.2.1. Bar-Force Vector $\mathbf{t}$

In the general solution of Equation (3), $\mathbf{t} = \mathbf{t}_p + \mathbf{t}_s$, the particular solution $\mathbf{t}_p$ is the least squares solution obtained using the Moore–Penrose inverse of $\mathbf{A}$, given by $\mathbf{A}^+ = \mathbf{W}_r \mathbf{V}_r^{-1} \mathbf{U}_r^T$. The homogeneous solution $\mathbf{t}_s$ is expressed as a linear combination of the nullspace basis vectors $\mathbf{W}_s$, which satisfy the homogeneous system $\mathbf{A}\mathbf{t} = \mathbf{0}$:

$$\mathbf{t} = \mathbf{A}^+ \mathbf{p} + \mathbf{W}_s \boldsymbol{\alpha} \tag{7}$$

The member elongation vector is given by $\mathbf{e} = \mathbf{F}\mathbf{t}$. To satisfy compatibility, and using the relationship $\mathbf{B} = \mathbf{A}^T$, the condition $\mathbf{W}_s^T \mathbf{e} = \mathbf{0}$ must hold. Therefore,

$$\boldsymbol{\alpha} = -\left(\mathbf{W}_s^T \mathbf{F} \mathbf{W}_s\right)^{-1} \mathbf{W}_s^T \mathbf{F} \mathbf{t}_p \tag{8}$$

Substituting this result into the general form yields:

$$\mathbf{t} = \mathbf{t}_p - \mathbf{W}_s \left(\mathbf{W}_s^T \mathbf{F} \mathbf{W}_s\right)^{-1} \mathbf{W}_s^T \mathbf{F} \mathbf{t}_p \tag{9}$$
Furthermore, when considering inelastic eigenstrains such as lack of fit, thermal expansion, residual stress, or prestress [68], the general solution can be derived using the modified elongation vector $\mathbf{e} = \mathbf{e}_0 + \mathbf{F}\mathbf{t}$, which includes the initial strain component $\mathbf{e}_0$.
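A compact NumPy sketch of this general solution, Equations (7)-(9), is given below; it assumes the equilibrium matrix $\mathbf{A}$, the diagonal flexibility matrix $\mathbf{F}$, and the load vector $\mathbf{p}$ are available as arrays (the function name and tolerance are assumptions):

```python
import numpy as np

def bar_forces(A, F, p, tol=1e-10):
    """Member forces t from Eqs. (7)-(9): particular solution via the
    Moore-Penrose inverse plus the compatibility-consistent self-stress part."""
    U, sv, WT = np.linalg.svd(A)
    r = int(np.sum(sv > tol))
    U_r, W_r, W_s = U[:, :r], WT[:r, :].T, WT[r:, :].T
    t_p = W_r @ np.diag(1.0 / sv[:r]) @ U_r.T @ p   # A^+ p, Eq. (7)
    if W_s.shape[1] == 0:                           # statically determinate
        return t_p
    G = W_s.T @ F @ W_s
    alpha = -np.linalg.solve(G, W_s.T @ F @ t_p)    # Eq. (8)
    return t_p + W_s @ alpha                        # Eq. (9)
```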

3.2.2. Displacement Vector $\mathbf{d}$

Similar to $\mathbf{t}$, the general solution of the compatibility Equation (4) for $\mathbf{d}$ can be expressed as the sum of a particular solution $\mathbf{d}_p$ and a complementary homogeneous solution $\mathbf{d}_m$. However, in a kinematically indeterminate system with $m > 0$, the existence of mechanisms prevents the displacement field from being uniquely determined by the equilibrium and compatibility conditions alone. This occurs when the external force vector $\mathbf{p}$ lies in the left nullspace of $\mathbf{A}$, i.e., $\mathbf{U}_m$, resulting in the absence of a valid solution. These mechanisms can be classified as either infinitesimal or finite. Except for first-order infinitesimal mechanisms that are stabilized by prestress, most exhibit inextensional behavior, which cannot be adequately captured by linear solutions alone [7,8,69]. In contrast, typical truss structures are kinematically determinate systems with $m = 0$, where only the particular solution $\mathbf{d}_p$ needs to be determined. In such cases, it is often more practical to use the direct relationship between $\mathbf{d}$ and $\mathbf{p}$ instead of relying solely on Equation (4). Since $\mathbf{F}$ is a full-rank diagonal matrix, its inverse $\mathbf{F}^{-1}$ always exists. Substituting $\mathbf{t} = \mathbf{F}^{-1}\mathbf{e}$ into Equations (3) and (4) yields:
$$\mathbf{A}\mathbf{F}^{-1}\mathbf{B}\,\mathbf{d} = \mathbf{p} \tag{10}$$

which defines the global stiffness matrix as $\mathbf{K} = \mathbf{A}\mathbf{F}^{-1}\mathbf{B}$. If $\mathbf{K}$ is non-singular, the particular displacement solution can be obtained as:

$$\mathbf{d} = \mathbf{U}_r \mathbf{V}_r^{-1} \mathbf{W}_r^T \mathbf{F} \mathbf{t}_p = \mathbf{B}^+ \mathbf{e} \tag{11}$$
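Under the same assumptions ($m = 0$, nonsingular $\mathbf{K}$), the displacement solution reduces to a single linear solve, as in this sketch (function name assumed):

```python
import numpy as np

def displacements(A, F, p):
    """Nodal displacements d for a kinematically determinate truss (m = 0),
    using K = A F^{-1} B with B = A^T, Eqs. (10)-(11)."""
    K = A @ np.linalg.inv(F) @ A.T   # global stiffness matrix
    return np.linalg.solve(K, p)
```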

3.3. Boundary Condition and Reaction

Equation (3) represents the equilibrium relationship between the member forces $\mathbf{t}$ and the external forces $\mathbf{p}$. When the prescribed boundary displacement $\mathbf{b}_b \in \mathbb{R}^{n_c}$ is given as $\mathbf{0}$, the equilibrium equation can be expressed in an augmented form as follows [6]:

$$\begin{bmatrix} \mathbf{A} & \mathbf{0} \\ \mathbf{A}_b & -\mathbf{I} \end{bmatrix} \begin{Bmatrix} \mathbf{t} \\ \mathbf{p}_r \end{Bmatrix} = \begin{Bmatrix} \mathbf{p} \\ \mathbf{0} \end{Bmatrix} \tag{12}$$

Here, $\mathbf{A}_b$ denotes the directional components of the members connected to the boundary nodes, and $\mathbf{p}_r$ represents the reaction forces. The boundary equation $\mathbf{A}_b \mathbf{t} = \mathbf{p}_r$ serves as a sufficient condition to uniquely determine $\mathbf{p}_r$ when $\mathbf{t}$ is given.
Meanwhile, in the case of the stiffness equation with boundary conditions included, the system can be expressed as:
$$\begin{bmatrix} \mathbf{K} & \mathbf{K}_b^T \\ \mathbf{K}_b & \mathbf{K}_c \end{bmatrix} \begin{Bmatrix} \mathbf{d} \\ \mathbf{d}_b \end{Bmatrix} = \begin{Bmatrix} \mathbf{p} \\ \mathbf{p}_r \end{Bmatrix} \tag{13}$$

Here, the boundary equation $\mathbf{K}_b \mathbf{d} = \mathbf{p}_r$ provides a sufficient condition to uniquely determine $\mathbf{p}_r$ from the displacement $\mathbf{d}$.
In summary, under given boundary conditions, both the equilibrium equation and the stiffness equation uniquely determine the reaction force $\mathbf{p}_r$ based on either the computed internal forces $\mathbf{t}$ or the displacements $\mathbf{d}$.

4. Proposed Index-Based Neural Network Framework

4.1. Overview of the Proposed Framework

The static analysis of truss structures involves determining the nodal displacements and internal member forces that satisfy the mechanical conditions under given external loads. Traditionally, such problems are addressed based on theoretical formulations, such as the Finite Element Method (FEM) or the force method. In this study, we developed an approximation model for truss analysis using a Deep Neural Network (DNN) that incorporates an index-based input structure based on the force method.
As discussed in Section 2.2, the truss system allows each structural member to be identified using an integer index, thereby making it feasible to use an index domain as the input instead of spatial coordinates. Specifically, each member or node of the structure is defined by a positive integer $i \in \mathbb{N}^+$, and the physical information is constructed by mapping the index to the corresponding spatial coordinates $\mathbf{X}$ and design parameters. Thus, the neural network models for approximating member forces and nodal displacements can be expressed as follows:

$$F_\theta^t(i): \mathbb{N}^+ \to \mathbb{R}, \qquad F_\theta^d(i): \mathbb{N}^+ \to \mathbb{R}^{n_d} \tag{14}$$
Here, $F_\theta^t$ and $F_\theta^d$ denote independent neural networks for predicting the member forces and nodal displacements, respectively, where the input index $i$ refers to either a structural member or a node. Unlike conventional FEM-based methods, which calculate member forces indirectly from nodal displacements, this framework enables the direct and independent approximation of both outputs for kinematically determinate systems.
Figure 2 illustrates the concept of index-domain mapping in the proposed Index-Based Neural Network (IBNN). In a truss structure defined in an $n$-dimensional Cartesian space, each member is topologically defined by its two end nodes, $\mathbf{X}_i$ and $\mathbf{X}_j$, and the corresponding member force $t_k$ $(k = 1, \ldots, n_b)$ satisfies equilibrium with the external load. Although this model originally requires a $2n$-dimensional input, the mapping to an index domain $i$ allows the approximation function to be reformulated in a simpler one-dimensional form. The approximation model for the nodal displacement $d_k$ $(k = 1, \ldots, n_f)$ can also be expressed by converting the spatial domain information of the node $\mathbf{X}_i$ into the index space. In this way, displacement can be modeled using the same index-domain-based one-dimensional input structure as the internal force. Here, the influence of member directionality and the local coordinate system is eliminated during the mapping process to the index domain, as the input is rearranged into an order-based (index-based) format that is independent of physical direction. This is because the input data are defined not over a coordinate-based continuous domain but rather over a structurally indexed discrete domain, as illustrated in the concept diagram of the index domain in Figure 2.
The IBNN framework has the following characteristics: simplified input representation with high normalization and scalability, modular and independent neural network structures for member forces and nodal displacements, implementation of a mechanics-informed neural network based on the force method (see Section 3) using an augmented Lagrangian formulation, and support for multi-index mapping and domain separation. Furthermore, multiple design parameters can be applied to a single index to enhance analytical flexibility. This allows for the functional separation of learning without explicitly partitioning the structural domain, which improves learning performance for complex structures and various analysis objectives. The proposed structure is extensible and can be applied to a wide range of structural analysis tasks, including optimization problems, damage detection, and design-parameter-based simulations.

4.2. Problem Formulation

The Boundary Value Problem (BVP) of a truss structure is governed by mechanical relations. For the member forces $\mathbf{t}$, it is defined by Equations (9) and (12), and for the nodal displacements $\mathbf{d}$, by Equations (11) and (13). These can be generalized into the following standard form of the problem:

$$\text{Find } \mathbf{y} \in \mathbb{R}^{n_b}\ (\text{or } \mathbb{R}^{n_f}) \text{ such that } \mathcal{E} := \mathcal{A}[\mathbf{y}(\mathbf{X}, \mathbf{z})] - \mathcal{B}[\mathbf{r}_b(\mathbf{X}_b, \mathbf{y})] = 0 \tag{15}$$
Here, $\mathcal{A}$ and $\mathcal{B}$ denote the mechanically defined governing equations and boundary conditions, respectively. $\mathbf{X} \in \Omega_s$ denotes the nodal coordinates in the spatial domain, and $\mathbf{X}_b \in \partial\Omega_s$ represents the spatial coordinates associated with the boundary conditions, where $\partial\Omega_s \subset \Omega_s$. The vector $\mathbf{z} \in \Omega_p$ contains the physical parameters of the truss, and $\mathbf{r}_b \in \mathbb{R}^{n_c}$ is the parameter corresponding to the boundary constraints.
In Equation (15), the approximation model for internal member forces is given by $\mathbf{y}_t \in \mathbb{R}^{n_b}$, while the model for nodal displacements is given by $\mathbf{y}_d \in \mathbb{R}^{n_f}$. As discussed in Section 3.3, the boundary-related parameters are uniquely determined by $\mathbf{t}$ and $\mathbf{d}$, which serve as sufficient conditions for satisfying the boundary operator constraints [31]. Therefore, these constraints are inherently embedded within the mechanical formulations. Accordingly, Equation (15) can be interpreted as a generalized mechanics-informed boundary-value problem:

$$\text{Find } \mathbf{y} \in \mathbb{R}^{n_b}\ (\text{or } \mathbb{R}^{n_f}) \text{ such that } \mathcal{E} := \mathcal{A}[\mathbf{y}(\mathbf{X}, \mathbf{z})] = 0 \tag{16}$$
Numerical approaches for solving boundary value problems, such as Equation (16), include a variety of classical methods such as displacement and force methods. Additionally, Deep Neural Networks (DNNs) can be employed to develop approximation models using data-driven learning. This problem can be formulated using a DNN-based prediction model, as follows:
$$\mathbf{y}(\mathbf{X}, \mathbf{z}) \approx \hat{\mathbf{y}}(\mathbf{X}, \mathbf{z}, \theta) = F_\theta(\mathbf{X}, \mathbf{z}), \quad \mathbf{X} \in \Omega_s,\ \mathbf{z} \in \Omega_p \tag{17}$$

Here, $F_\theta$ denotes the neural network function, $\Omega_s = \left\{ \mathbf{X} \in \mathbb{R}^{n_d \times n_j} \mid \mathbf{x}_j \in \mathbb{R}^{n_d},\ j = 1, \ldots, n_j \right\}$ represents the spatial domain of nodal coordinates, and $\Omega_p \subset \mathbb{R}^{n_b \times n_p}$ denotes the parameter domain, where $n_p$ is the number of parameters defined for each member.
Neural network-based studies for solving boundary value problems such as Equation (15) have been developed using both data-driven approaches and physics- or energy-informed models such as Physics-Informed Neural Networks (PINNs) (see Section 2.1). Various libraries have been developed to construct and train neural-network models. Python libraries, such as TensorFlow and PyTorch, are commonly used, whereas MATLAB provides a convenient tool for model development and training.

4.3. Introduction of Index-Based Neural Network Architecture

4.3.1. Deep Neural Network Structure

The Neural Network (NN) used in the proposed IBNN framework adopts a Deep Neural Network (DNN) architecture composed of $L$ layers. Generally, a DNN is defined as a function $F_\theta: \mathbb{R}^{n_i} \to \mathbb{R}^{n_o}$ that maps an input vector $\bar{\mathbf{x}} \in \mathbb{R}^{n_i}$ to an output vector $\bar{\mathbf{y}} \in \mathbb{R}^{n_o}$. The set of trainable parameters is denoted by $\theta = \{\mathbf{W}^{(l)}, \mathbf{b}^{(l)}\}_{l=1}^{L}$ and includes the weight matrices and bias vectors for each layer. The operation of each hidden layer is defined as

$$\bar{\mathbf{z}}^{(0)} = \bar{\mathbf{x}}, \qquad \bar{\mathbf{z}}^{(l)} = \sigma^{(l)}\left(\mathbf{W}^{(l)} \bar{\mathbf{z}}^{(l-1)} + \mathbf{b}^{(l)}\right), \quad l = 1, 2, \ldots, L \tag{18}$$

Here, $\sigma^{(l)}$ is the nonlinear activation function applied at the $l$th layer, and the final output is given by $\bar{\mathbf{y}} = \bar{\mathbf{z}}^{(L)}$. When expressed in the form of a composite function, the DNN model can be written as

$$\bar{\mathbf{y}} = F_\theta(\bar{\mathbf{x}}) = \sigma^{(L)} \circ \mathcal{A}^{(L)} \circ \cdots \circ \sigma^{(1)} \circ \mathcal{A}^{(1)}(\bar{\mathbf{x}}) \tag{19}$$

where $\mathcal{A}^{(l)}(\cdot) = \mathbf{W}^{(l)}(\cdot) + \mathbf{b}^{(l)}$ denotes the affine transformation in layer $l$.
In this study, an Index-Based Neural Network (IBNN) framework is proposed, in which the input–output structure of the DNN model is defined based on integer indices, and training is performed through the design of a mechanics-informed loss function for predicting member forces and nodal displacements. Here, the input index $i \in \mathbb{N}^+$ is normalized using z-score normalization with the mean $\mu_i$ and standard deviation $\sigma_i$ to reduce scale differences and enhance stability. The output is normalized using the maximum value of the given reference response, $\max(\mathbf{y}_e)$. Therefore, all losses and errors associated with the predicted outputs presented in this study are expressed in nondimensionalized form.
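A minimal PyTorch sketch of such an index-based DNN is shown below. It follows the settings used later in Section 5 (tanh activations, two hidden layers of 50 neurons); the class name and calling interface are illustrative assumptions:

```python
import torch
import torch.nn as nn

class IBNN(nn.Module):
    """Index-based network: a tanh MLP fed with a z-score-normalized index."""
    def __init__(self, n_out=1, hidden=50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_out),
        )

    def forward(self, idx, mu, sigma):
        z = (idx.float().unsqueeze(-1) - mu) / sigma   # z-score normalization
        return self.net(z)

# Usage: all member indices 1..n_b form one training batch
n_b = 25
idx = torch.arange(1, n_b + 1)
mu, sigma = idx.float().mean(), idx.float().std()
t_hat = IBNN(n_out=1)(idx, mu, sigma).squeeze(-1)      # predicted forces
```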

4.3.2. Index-Based Input Domain Mapping

The input to the proposed IBNN is given by an index $i \in \mathbb{N}^+$, through which various physical and mechanical information such as connectivity, nodal coordinates, axial stiffness, and external forces is embedded. The set of member indices is defined as $\mathcal{M} = \{i \in \mathbb{N}^+ \mid i = 1, 2, \ldots, n_b\}$ and the set of node indices as $\mathcal{N} = \{j \in \mathbb{N}^+ \mid j = 1, 2, \ldots, n_j\}$.
First, the coordinates of node $j \in \mathcal{N}$ are given by $\mathbf{x}_j \in \mathbb{R}^{n_d}$ and stored in the nodal coordinate set $\mathbf{X} = \{\mathbf{x}_j \in \mathbb{R}^{n_d} \mid j \in \mathcal{N}\}$, which corresponds to the domain $\Omega_s$. Similarly, the external forces are given by $\mathbf{p} = \{\mathbf{p}_j \in \mathbb{R}^{n_d} \mid j \in \mathcal{N}\}$, and each $\mathbf{p}_j$ is defined according to the prescribed loading conditions.
For each member $i \in \mathcal{M}$, the connectivity is returned by the mapping $C(i) = (a(i), b(i))$, where the function $C: \mathcal{M} \to \mathcal{N} \times \mathcal{N}$ defines the structural connectivity. The Cartesian product $\mathcal{N} \times \mathcal{N}$ is defined as $\{(i, j) \mid i \in \mathcal{N}, j \in \mathcal{N}\}$. The axial stiffness of each member is likewise indexed by $i$ and given by $EA(i)$, $i \in \mathcal{M}$.
The elements of the feature vector $\phi(i)$ constructed from such index-based information are used to define the loss function of the IBNN model. The specific components of $\phi(i)$ depend on the target prediction. For the member force prediction model, denoted by $\hat{t}(i)$, the input feature is defined as follows:

$$\hat{t}(i) = F_\theta^t\left(\phi_t(i)\right) \quad \text{where} \quad \phi_t(i) = \left[\mathbf{x}_{a(i)}, \mathbf{x}_{b(i)}, \mathbf{p}_{a(i)}, \mathbf{p}_{b(i)}, EA(i)\right] \tag{20}$$

For the prediction of nodal displacements, the model $\hat{\mathbf{d}}(j)$ is defined as

$$\hat{\mathbf{d}}(j) = F_\theta^d\left(\phi_d(j)\right) \quad \text{where} \quad \phi_d(j) = \left[\mathbf{x}_j, \mathbf{p}_j, \{EA(i)\}_{i \in \mathcal{M}}\right] \tag{21}$$

The IBNN models $F_\theta^t$ and $F_\theta^d$ are defined as $\hat{F}_\theta: \mathbb{N}^+ \to \mathbb{R}$. An effective reduction in input dimensionality was achieved by replacing the parameter space with index-based representations. Compared with coordinate-based models, this approach simplifies the model structure while simultaneously providing scalability and flexibility in the output dimensions and analytical formulation.
Furthermore, if the sets $\mathcal{M}$ or $\mathcal{N}$ are used as the dataset for a training epoch, the neural network model can be constructed without explicitly generating mechanical information for each index, as the operators $\mathcal{A}$ and $\mathcal{B}$ can be directly utilized. This significantly simplifies the neural network modeling process.
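The index-based bookkeeping of Equations (20) and (21) can be expressed with ordinary Python containers, as in the following toy sketch (the two-member geometry, loads, and stiffness values are illustrative assumptions):

```python
# Toy index-based data for a 2-member, 3-node plane truss (values assumed).
M = [1, 2]                                           # member index set
N = [1, 2, 3]                                        # node index set
X = {1: (0.0, 0.0), 2: (1.0, 0.0), 3: (0.5, 1.0)}    # nodal coordinates x_j
P = {1: (0.0, 0.0), 2: (0.0, 0.0), 3: (0.0, -1.0)}   # nodal loads p_j
C = {1: (1, 3), 2: (2, 3)}                           # connectivity C(i) = (a(i), b(i))
EA = {1: 2.0e6, 2: 2.0e6}                            # axial stiffness EA(i)

def phi_t(i):
    """Feature tuple phi_t(i) of Eq. (20), gathered purely through indices."""
    a, b = C[i]
    return (*X[a], *X[b], *P[a], *P[b], EA[i])
```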

4.3.3. Multi-Mapping and Domain-Separated Neural Networks

The IBNN model can be extended by incorporating multi-indexing or domain separation. By defining a multi-mapping function, the input index $i \in \mathbb{N}^+$ can be associated with a multi-output structure within a single DNN. For example, in the case of member force prediction, the multi-mapping can be defined as

$$\mathcal{M}_m(i) = \left(\hat{t}_1(i), \hat{t}_2(i), \ldots, \hat{t}_m(i)\right), \quad \mathcal{M}_m \subset \mathcal{M} \tag{22}$$

This corresponds to a model of the form $F_\theta^t(i): \mathbb{N}^+ \to \mathbb{R}^m$.
For nodal displacement prediction, because each node is associated with $n_d$ degrees of freedom, the original model follows the structure $F_\theta^d(i): \mathbb{N}^+ \to \mathbb{R}^{n_d}$. However, it can also be extended using multi-mapping, as

$$\mathcal{N}_m(i) = \left(\hat{\mathbf{d}}_1(i), \hat{\mathbf{d}}_2(i), \ldots, \hat{\mathbf{d}}_m(i)\right), \quad \mathcal{N}_m \subset \mathcal{N} \tag{23}$$

In this case, the neural network becomes $F_\theta^d(i): \mathbb{N}^+ \to \mathbb{R}^{m \cdot n_d}$.
In a domain-separated IBNN model, the same input index can still be provided; however, the output can be separated into different domains by routing the input into multiple subnetworks. In this framework, the multi-mapping functions associate the outputs $t_i$ and $\mathbf{d}_i$ with the subsets $\mathcal{M}_m$ and $\mathcal{N}_m$, respectively.
This model is referred to as a composite separated model, in which a single index is used as the input but separate neural network structures are assigned. In other words, the model is composed of sub-networks centered on the input index, and the outputs of each sub-DNN are aggregated and jointly trained. This architecture enables functional separation of the output domains without the need for mathematically explicit domain decomposition, thereby providing structural scalability and analytical flexibility. The two-part separated model is expressed as follows:
$$F_\Theta(i) = \left[F_{\theta_1}^{(1)}(i),\ F_{\theta_2}^{(2)}(i)\right] \in \mathbb{R}^{n^{(1)}} \times \mathbb{R}^{n^{(2)}} \tag{24}$$

Here, $F_{\theta_j}^{(j)}$ denotes the individual neural network corresponding to the output domain $j = 1, 2$, and $\Theta = \{\theta_1, \theta_2\}$ is the full set of trainable parameters of the model.
In Equation (24), the member force model is denoted by $F_\Theta^t(i)$ and the nodal displacement model by $F_\Theta^d(j)$, where the corresponding index sets are defined as $\{i \in \mathcal{M}_s \mid \mathcal{M}_s \subset \mathcal{M}\}$ and $\{j \in \mathcal{N}_s \mid \mathcal{N}_s \subset \mathcal{N}\}$, respectively.
The proposed composite-separated model also adopts a multi-mapping structure, in which a single index is mapped to multiple outputs. Although each subnetwork was independently constructed, the training process was performed using a single integrated loss function. Therefore, as in the cases of $\mathcal{M}_m$ and $\mathcal{N}_m$, when training is conducted using a dataset constructed per epoch, the coefficient matrices can be directly utilized in the neural network modeling.
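The following PyTorch sketch illustrates the two-part separated model of Equation (24): a single normalized index input is routed to two subnetworks, and their outputs are aggregated under one loss. The layer sizes mirror Section 5; the class name is an assumption:

```python
import torch.nn as nn

class SeparatedIBNN(nn.Module):
    """Composite separated model of Eq. (24): one index, two subnetworks."""
    def __init__(self, n1, n2, hidden=50):
        super().__init__()
        def mlp(n_out):
            return nn.Sequential(
                nn.Linear(1, hidden), nn.Tanh(),
                nn.Linear(hidden, hidden), nn.Tanh(),
                nn.Linear(hidden, n_out))
        self.f1, self.f2 = mlp(n1), mlp(n2)   # parameters theta_1 and theta_2

    def forward(self, z):
        # Both subnetwork outputs are returned and later joined in one loss.
        return self.f1(z), self.f2(z)
```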

4.4. Mechanics-Informed Loss Function Design Using Augmented Lagrangian

4.4.1. Lagrangian Dual Optimization

To design the loss function of the neural network for predicting member forces or nodal displacements, the residual can be expressed using the Euclidean norm, and the formulation can be cast as an optimization problem with mechanical equality constraints, as follows:
$$\min_\theta \left\| \mathbf{u} - \hat{\mathbf{u}} \right\|_2^2 \quad \text{subject to} \quad \mathbf{A}\hat{\mathbf{u}} = \mathbf{p} \tag{25}$$

where $\hat{\mathbf{u}}$ denotes the predicted solution obtained from the neural network model in Equation (17), $\mathbf{A}$ denotes the coefficient matrix corresponding to the equilibrium equations (or the stiffness matrix defined in Equation (10)), and $\mathbf{p}$ denotes the external force vector. One common approach to solving the equality-constrained optimization problem given in Equation (25) is to reformulate it as an unconstrained optimization problem using a quadratic penalty function.
Let the objective function be defined as $J_r(\theta) = \|\mathbf{u} - \hat{\mathbf{u}}\|_2^2$, and the physics-based constraint function as

$$\mathbf{C}(\theta) = \mathbf{p} - \mathbf{A}\hat{\mathbf{u}} \tag{26}$$
Then, using $J_r(\theta)$, $\mathbf{C}(\theta)$, and a penalty parameter $\beta$, the constrained optimization problem in Equation (25) can be reformulated as the following unconstrained optimization problem:

$$\arg\min_\theta J_\beta(\theta) = \arg\min_\theta \left[ J_r(\theta) + \beta \left\| \mathbf{C}(\theta) \right\|_2^2 \right] \tag{27}$$
Here, the penalty parameter $\beta$ is defined as a scalar sequence that increases over the iterations, which influences the stability of the optimization system. However, this stability issue was not addressed in this study.
The optimization model given in Equation (27) is one approach for solving the constrained optimization problem. Alternatively, Equation (25) can be addressed using the Lagrange multiplier method, which can be formulated as

$$\arg\min_\theta J_\lambda(\theta) = \max_{\boldsymbol{\lambda} \ge \mathbf{0}} \min_\theta \left[ J_r(\theta) + \boldsymbol{\lambda} \cdot \mathbf{C}(\theta) \right] \tag{28}$$
Here, $\boldsymbol{\lambda} \in \mathbb{R}^N$ is the Lagrange multiplier vector, which is iteratively updated according to the learning rate $\eta$ as follows:

$$\boldsymbol{\lambda}^{(t+1)} = \boldsymbol{\lambda}^{(t)} + \eta\, \mathbf{C}(\theta^{(t)}) \tag{29}$$
This approach establishes a framework for addressing the original optimization problem via its dual formulation, thereby offering an alternative route to achieve the optimal solution.
The Augmented Lagrangian method, which effectively combines the two previously described approaches, is adopted here following Hwang and Son [70]. This method augments the Lagrangian function $J_\lambda(\theta)$ with an additional penalty term and is formulated as
$$\arg\min_\theta J_{\lambda,\beta}(\theta) = \max_{\boldsymbol{\lambda} \ge \mathbf{0}} \min_\theta \left[ J_r(\theta) + \boldsymbol{\lambda} \cdot \mathbf{C}(\theta) + \beta \left\| \mathbf{C}(\theta) \right\|_2^2 \right] \tag{30}$$
To find the optimal solution of Equation (30), the sequence of Lagrange multipliers $\boldsymbol{\lambda}$ is updated as follows:

$$\boldsymbol{\lambda}^{(t+1)} = \boldsymbol{\lambda}^{(t)} + \beta\, \mathbf{C}(\theta^{(t)}) \tag{31}$$
This type of method has been shown to converge stably without increasing β during the iterations [70].
In this study, a loss-function-based optimization model was constructed for the analysis of truss structures. Given an index as the input to the IBNN model, the resulting output is used to compute the loss function, which is iteratively minimized through parameter optimization. The classical Adam optimizer was adopted to update the model parameters during the training.
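A condensed sketch of this training loop is given below. It assumes a model mapping normalized index inputs `z` to predictions, a coefficient matrix `A`, a load vector `p`, and a reference solution `u` for the residual term, all as torch tensors; the function name and default hyperparameters are assumptions:

```python
import torch

def train_alm(model, z, A, p, u, beta=1e-4, epochs=50_000, lr=1e-3):
    """Augmented Lagrangian training, Eqs. (30)-(31), with the Adam optimizer."""
    lam = torch.zeros(A.shape[0])                # Lagrange multipliers
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        u_hat = model(z).squeeze(-1)
        C = p - A @ u_hat                        # constraint residual, Eq. (26)
        loss = ((u - u_hat) ** 2).mean() \
             + (lam * C).sum() \
             + beta * (C ** 2).sum()             # augmented loss, Eq. (30)
        opt.zero_grad()
        loss.backward()
        opt.step()
        lam = lam + beta * C.detach()            # multiplier update, Eq. (31)
    return model, lam
```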

4.4.2. Loss Functions for Member Force and Nodal Displacement Prediction

Based on the Lagrangian dual-optimization framework, the loss function for the member force prediction model was defined. First, the residual loss function is given by
$$\mathcal{L}_r(\theta) = \frac{1}{N} \left\| \mathbf{t} - \hat{\mathbf{t}} \right\|_2^2 = \frac{1}{N} \left\| \mathbf{r} \right\|_2^2 \tag{32}$$

where $\mathbf{r} = \mathbf{t} - \hat{\mathbf{t}}$ is the residual vector between the true member forces $\mathbf{t}$ and the predicted forces $\hat{\mathbf{t}}$ obtained from the model $F_\theta^t$.
The quadratic penalty term is defined as follows:
$$\mathcal{L}_q(\beta, \theta) = \beta \left\| \mathbf{p} - \mathbf{A}\hat{\mathbf{t}} \right\|_2^2 \tag{33}$$

where $\beta$ is the penalty parameter and $\mathcal{L}_q$ quantifies the degree of constraint violation for the structural equilibrium condition. This term ensures the physical consistency of the prediction. Because the structural equilibrium condition is defined as $\mathbf{p} = \mathbf{A}\mathbf{t}$, the penalty term can also be written as:

$$\mathcal{L}_q(\beta, \theta) = \beta \left\| \mathbf{A}(\mathbf{t} - \hat{\mathbf{t}}) \right\|_2^2 = \beta \left\| \mathbf{A}\mathbf{r} \right\|_2^2 \tag{34}$$
In addition, the Lagrangian term that directly incorporates the constraint can be defined using the Lagrange multiplier vector $\boldsymbol{\lambda} \in \mathbb{R}^N$:

$$\mathcal{L}_L(\lambda, \theta) = \boldsymbol{\lambda}^T \left( \mathbf{p} - \mathbf{A}\hat{\mathbf{t}} \right) = \boldsymbol{\lambda}^T (\mathbf{A}\mathbf{r}) \tag{35}$$

Here, $\mathcal{L}_L$ represents the Lagrangian term that explicitly enforces the constraint by directly penalizing the violation of the equilibrium condition through $\mathbf{A}\mathbf{r}$.
From $\mathcal{L}_r$, $\mathcal{L}_q$, and $\mathcal{L}_L$, the total loss function $\mathcal{L}_t$ can be constructed. The optimization model based on the quadratic penalty function is expressed as
$$\mathcal{L}_t^\beta(\beta, \theta) = \mathcal{L}_r + \mathcal{L}_q = \frac{1}{N} \left\| \mathbf{r} \right\|_2^2 + \beta \left\| \mathbf{A}\mathbf{r} \right\|_2^2 \tag{36}$$

where the penalty parameter $\beta$ is treated as a fixed constant.
The model based on the Lagrange multiplier is given by

$$\mathcal{L}_t^\lambda(\lambda, \theta) = \mathcal{L}_r + \mathcal{L}_L = \frac{1}{N} \left\| \mathbf{r} \right\|_2^2 + \boldsymbol{\lambda}^T (\mathbf{A}\mathbf{r}) \tag{37}$$

In this formulation, the Lagrange multiplier vector $\boldsymbol{\lambda}$ is updated at each iteration as follows:

$$\boldsymbol{\lambda}_{i+1} = \boldsymbol{\lambda}_i + \eta\, (\mathbf{A}\mathbf{r}) \tag{38}$$
The augmented Lagrangian method, which combines $\mathcal{L}_t^\beta$ and $\mathcal{L}_t^\lambda$, is expressed as:

$$\mathcal{L}_t^{\lambda,\beta}(\lambda, \beta, \theta) = \mathcal{L}_r + \mathcal{L}_L + \mathcal{L}_q = \frac{1}{N} \left\| \mathbf{r} \right\|_2^2 + \boldsymbol{\lambda}^T (\mathbf{A}\mathbf{r}) + \beta \left\| \mathbf{A}\mathbf{r} \right\|_2^2 \tag{39}$$

In this augmented formulation, the Lagrange multiplier $\boldsymbol{\lambda}$ is updated according to:

$$\boldsymbol{\lambda}_{i+1} = \boldsymbol{\lambda}_i + \beta\, (\mathbf{A}\mathbf{r}) \tag{40}$$
In the proposed framework, the IBNN models for predicting the member forces $\mathbf{t}$ and nodal displacements $\mathbf{d}$ share the same optimization structure, except for the design variables. For the displacement $\mathbf{d}$, the residual is defined as $\mathbf{r} = \mathbf{d} - \hat{\mathbf{d}}$, and the coefficient matrix is given by $\mathbf{A} \rightarrow [\mathbf{A}\mathbf{F}^{-1}\mathbf{B}] = \mathbf{K}$, where $\mathbf{K}$ represents the global stiffness matrix.
Accordingly, the residual loss function becomes $\mathcal{L}_r = \frac{1}{N}\|\mathbf{r}\|_2^2$, and the total loss functions are defined as follows:

$$\mathcal{L}_t^\beta(\beta, \theta) = \mathcal{L}_r + \beta \left\| \mathbf{K}\mathbf{r} \right\|_2^2, \qquad \mathcal{L}_t^\lambda(\lambda, \theta) = \mathcal{L}_r + \boldsymbol{\lambda}^T (\mathbf{K}\mathbf{r}), \qquad \mathcal{L}_t^{\lambda,\beta}(\lambda, \beta, \theta) = \mathcal{L}_r + \boldsymbol{\lambda}^T (\mathbf{K}\mathbf{r}) + \beta \left\| \mathbf{K}\mathbf{r} \right\|_2^2 \tag{41}$$
These loss functions follow the same update strategies for the Lagrangian multiplier λ and penalty parameter β as used in the member-force model.
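The three loss variants can be written compactly; the sketch below computes Equations (36), (37), and (39) for the force model (for the displacement model, the same function applies with $\mathbf{K}$ in place of $\mathbf{A}$ and $\mathbf{r} = \mathbf{d} - \hat{\mathbf{d}}$). The function name is an assumption:

```python
import torch

def total_losses(r, A, lam, beta):
    """Case 1-3 losses of Eqs. (36), (37), and (39) from the residual r."""
    Ar = A @ r                                   # constraint violation A r
    L_r = (r ** 2).mean()                        # residual loss, Eq. (32)
    L_q = beta * (Ar ** 2).sum()                 # quadratic penalty, Eq. (34)
    L_L = (lam * Ar).sum()                       # Lagrangian term, Eq. (35)
    return L_r + L_q, L_r + L_L, L_r + L_L + L_q
```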

4.5. Schematic Architecture of the Proposed IBNN Framework

This section presents a schematic representation of the overall structure of the proposed IBNN framework. The framework was designed to perform a static analysis of truss structures by constructing neural networks to predict either member forces or nodal displacements. Unlike traditional coordinate-based models, the input to the network consists of integer index data, rather than spatial coordinates. Each index is mapped to structural information such as members, nodes, and external loading conditions. This approach reduces the complexity of the input dimensionality and enables flexible extension to various structural parameters.
Furthermore, each neural network module is structured as an independent network, while sharing a common index domain that allows scalable learning via domain separation. The loss functions were designed using mechanics-informed constraints formulated using an augmented Lagrangian method.
Figure 3 schematically illustrates the overall architecture of the proposed IBNN framework. Subfigure (a) represents the case of a single index domain $\mathcal{M}$ (or $\mathcal{N}$), (b) shows the case of a multi-domain $\mathcal{M}_m$ (or $\mathcal{N}_m$), and (c) depicts the case of a separated domain $\mathcal{M}_s$ (or $\mathcal{N}_s$). Each index set satisfies the relations $\mathcal{M}_m \subset \mathcal{M}$ and $\mathcal{M}_s \subset \mathcal{M}$, and when (b) and (c) are combined, $\mathcal{M}_s \subset \mathcal{M}_m$ also holds. The same relations apply to the sets defined on $\mathcal{N}$. Subfigure (d) further illustrates the computational process from the index-based input structure to the output and the application of mechanics-informed constraints, with a particular focus on the Lagrangian loss $\mathcal{L}_L$ computed from the row vector $\mathbf{A}(i)$ of the force-method-based coefficient matrix and the residual $\mathbf{r}$, explaining how the equilibrium constraints are enforced.

5. Numerical Experiments

This section presents the results of the numerical experiments conducted on various truss structures to validate the effectiveness of the Index-Based Neural Network (IBNN) framework proposed in Section 4. The tested structures included spatial trusses, modular frame structures, and dome-shaped trusses, covering a range of geometric forms and complexities. The focus was on evaluating the accuracy, generalization performance, and applicability of the model to structural analysis problems.
All neural network training was performed under identical conditions using the hyperbolic tangent (tanh) activation function and the Adam optimizer. However, due to differences in the system scale and number of degrees of freedom, the depth and number of nodes in the hidden layers were adjusted for each example. This design allows for an effective representation of the input domain based on structural complexity. The corresponding loss function settings for each case are summarized in Table 2. The parameters $\beta$ and $\lambda$ used in each case are described in Equation (40). In Case 2, the parameter update for $\lambda$ was applied using $\eta$ in place of $\beta$.
In this study, four quantitative metrics were employed to evaluate the performance of the proposed IBNN: Mean Squared Error (MSE, $L_2$), Relative Root Mean Squared Error (RMSE, $R_{L_2}$), Maximum Absolute Error ($L_\infty$), and Coefficient of Determination ($R^2$). Given a target vector $\mathbf{y} = \{y_i\}_{i=1}^N$ and predicted values $\hat{\mathbf{y}} = \{\hat{y}_i\}_{i=1}^N$, $R^2$ is defined as follows:

$$R^2 = 1 - \frac{\sum_{i=1}^{N} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{N} (y_i - \bar{y})^2} \tag{42}$$

where $\bar{y}$ denotes the mean of the target values. An $R^2$ value closer to 1 indicates higher prediction accuracy. These evaluation metrics jointly consider both absolute and relative error characteristics and are used to comprehensively assess the prediction accuracy and generalization performance of the model. In addition, the loss and error results of the neural network predictions presented in the tables and figures of this section are computed as normalized, nondimensional values; hence, physical units are not applied.
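For reference, the four metrics can be computed as in the following sketch; the exact normalization of $R_{L_2}$ is not spelled out in the text, so the relative form below is an assumption:

```python
import numpy as np

def metrics(y, y_hat):
    """MSE (L2), relative RMSE (R_L2, assumed form), L_inf, and R^2, Eq. (42)."""
    err = y - y_hat
    mse = np.mean(err ** 2)
    rrmse = np.sqrt(np.sum(err ** 2) / np.sum(y ** 2))
    linf = np.max(np.abs(err))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y - np.mean(y)) ** 2)
    return mse, rrmse, linf, r2
```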

5.1. The 25-bar Space Truss

This section presents the structural analysis of the 25-bar space truss and discusses the corresponding results. The target model, illustrated in Figure 4, is a three-dimensional structure composed of 25 members and 10 nodes. External forces were applied at the two top nodes of the structure: at Node 1, a load of $+80$ kN in the y-direction and $-20$ kN in the z-direction was applied, whereas at Node 2, a load of $-80$ kN in the y-direction and $-20$ kN in the z-direction was applied. The material properties used in the model are as follows: Young's modulus $E = 2.0 \times 10^{11}\ \mathrm{N/m^2}$, density $\rho = 7860\ \mathrm{kg/m^3}$, and cross-sectional area $A = 10\ \mathrm{mm^2}$.
In this example, the objective is to predict the axial forces $\mathbf{t}$ in each member and the displacements $\mathbf{d}$ under the given loading conditions. The IBNN architecture employed was based on a single-index domain structure, as shown in Figure 3, consisting of two hidden layers with 50 neurons each. The network was trained using the Adam optimizer with a learning rate of $\eta_{ibnn} = 0.001$ for a maximum of 50,000 epochs.
The training results of the $F_\theta^t(i)$ model for predicting the axial forces $\mathbf{t}$ are summarized in Table 3. The lowest $L_2$ values were obtained at $\beta = 0.001$ for Case 1, $\beta = 0.01$ for Case 2, and $\beta = 0.0001$ for Case 3. Although Case 1 showed stable training performance across all tested $\beta$ values, Cases 2 and 3 failed to converge when $\beta = 0.01$, with a significant drop in $R^2$ scores. Similar trends are observed for the $R_{L_2}$ and $L_\infty$ metrics.
The changes in the total loss $\mathcal{L}_t$ and $L_2$ over the epochs for the $F_\theta^t(i)$ model are shown in Figure 5. As shown in Figure 5a, when $\beta = 0.01$ in Cases 2 and 3, the model failed to converge even as the number of epochs increased, resulting in large errors. By contrast, for $\beta = 0.0001$, all cases showed convergence, with Case 3 reaching more precise values than the others.
Unlike Case 1, which employed only a penalty term, Cases 2 and 3 computed a loss function based on a Lagrangian term. The effectiveness of the Lagrangian-based formulation is shown in Figure 5b. As shown, the loss functions for Cases 2 and 3 experienced significant changes during the initial training phase, which gradually diminished as $L_2$ converged to a stable range. In particular, Case 3 achieved the most accurate results. In addition, the cases of $\beta = 0.01$ for Cases 2 and 3 shown in Figure 5a correspond to divergent behaviors. The corresponding numerical values can be found in Table 3, and their convergence trends are visually represented in Figure 5c. Furthermore, the loss function variations for these divergent cases can be observed in Figure 5d.
The training results of the $F_\theta^d(i)$ model for predicting the displacements $\mathbf{d}$ are summarized in Table 4. The lowest $L_2$ value was achieved at $\beta = 0.0001$ for Cases 1, 2, and 3. As observed in the results of the $\mathbf{t}$ model, Case 1 showed stable training performance across all tested $\beta$ values, whereas Cases 2 and 3 failed to achieve satisfactory values of $L_2$ and $\mathcal{L}_t$ when $\beta = 0.01$. This trend was also observed for the other evaluation metrics. Although Case 1 yielded the best overall performance, the differences between the best results for each case were negligible.
The evolution of the total loss $\mathcal{L}_d$ and $L_2$ across epochs for the $F_\theta^d(i)$ model is shown in Figure 6. As shown in Figure 6a, for $\beta = 0.01$, Cases 2 and 3 are less precise than the other settings, even after many epochs. In contrast, all cases converged well when $\beta = 0.0001$.
The effect of the Lagrangian term observed in the t model is also evident for Cases 2 and 3 in Figure 6b. While the loss function in Case 1 decreased steadily throughout training, the loss functions in Cases 2 and 3 fluctuated significantly during the early training phase; as these fluctuations diminished, L2 converged to a stable range, as shown in Figure 6a.
In the F θ d ( i ) model, Case 2 and Case 3 with β = 0.01 exhibit significantly degraded training performance, as shown in Figure 6a. The numerical results presented in Table 4 are visually confirmed in Figure 6c, where the loss function in Figure 6d displays periodic oscillations, and the L 2 error pattern shows distinct variations. A similar tendency is also observed in the prediction results of the bar-force t , indicating that when using a Lagrangian-based loss function, setting a lower β value is crucial for ensuring training stability.

5.2. A Three-by-Three Square Grid Space Frame

The example presented in this section is a double-layer space frame, as illustrated in Figure 7. The target model consists of a 3 × 3 grid structure with a unit grid spacing of D = 5.0 m, and the height of the upper layer is defined as H = (√2/2)D = 3.5355 m. The material used for the model was steel with a density of ρ = 7860 kg/m³ and a Young's modulus of E = 2.0 × 10¹¹ N/m². All members were assigned the same cross-sectional area (A = 10 cm²) and material properties. A uniformly distributed load of 30 kN in the negative z-direction was applied to all nodes of the upper grid layer.
This example also aims to predict the axial forces t in each member and the displacements d under the given loading conditions. As in the example described in Section 5.1, the IBNN network structure used here is based on a single-index domain architecture, as shown in Figure 3, consisting of two hidden layers with 50 neurons each. The network was trained using the Adam optimizer with a learning rate of η_ibnn = 0.001 and a maximum of 100,000 epochs.
The results in Table 5 correspond to the training performance of the axial-force prediction model F_θt(i). As shown in the table, the β values yielding the lowest L2 error were β = 0.01 for Case 1, β = 0.0001 for Case 2, and β = 0.001 for Case 3. The results of Case 1 were generally less accurate than those of Cases 2 and 3; however, Cases 2 and 3 diverged when β = 0.01, as shown in Figure 8a. The best L2 performance was achieved in Case 3 with β = 0.001, whereas Case 2 with β = 0.0001 also exhibited comparable precision, as illustrated in Figure 8a. These trends are further supported by the R_L2 and L∞ metrics.
In the plots of the total loss L_t and L2 over the epochs shown in Figure 8, Case 1 with β = 0.0001 demonstrates a relatively poor search speed and precision compared with the other settings, with L_t converging quickly to a low-precision plateau. In contrast, Case 2 with β = 0.0001 exhibited significant variation during the initial learning phase. The best precision was achieved in Case 3 with β = 0.001; although β = 0.001 degraded the accuracy in Case 2, it yielded the most accurate outcome in Case 3. Given that Cases 2 and 3 both incorporate Lagrangian terms, unlike Case 1, it is meaningful to consider the range of β values for which convergence was achieved. As seen in Table 5, the behavior of L2 in Cases 2 and 3 reveals clear thresholds that distinguish stable training regimes. As with the 25-bar space truss example, Cases 2 and 3 under β = 0.01 again failed to converge during training. The numerical results presented in Table 5 are visually confirmed in Figure 8c, and the loss function profile in Figure 8d also reveals a significant degradation in training performance.
The training results of the F θ d ( i ) model for this example are summarized in Table 6. The lowest L 2 error was achieved with β = 0.01 for Case 1, and β = 0.0001 for both Case 2 and Case 3. Similar to the results for F θ t ( i ) , Cases 2 and 3 exhibited reduced accuracy and convergence when larger β values were used. In contrast, Case 1 achieved better results with a higher β .
These trends are also evident in Figure 9, which shows the changes in total loss L d and L 2 over epochs for the F θ d ( i ) model. As shown in Figure 9a, β = 0.01 in Cases 2 and 3 did not lead to convergence even as the number of epochs increased. However, β = 0.0001 led to convergence in all cases, with Case 2 showing the best convergence and precision. The effects of the Lagrangian term in Cases 2 and 3 can be observed in Figure 9b. As illustrated, the loss functions for Cases 2 and 3 exhibit significant variations in the early training stages, and as these fluctuations diminish, the L 2 metric converges to a stable range, as shown in Figure 9a. By contrast, the loss function of Case 1 shows relatively little variation during the initial phase, and the behavior of L 2 in Case 3 closely resembles the results observed in Section 5.1.
Figure 9c visually presents the L 2 errors for Case 2 and Case 3 under β = 0.01 , as listed in Table 6. As seen in Figure 9a, the L 2 variation in this case shows a distinct pattern compared to other configurations, which is also reflected in the behavior of the loss function shown in Figure 9d. This supports the observation made in the 25-bar space truss example, where using a Lagrangian loss function requires careful selection of a small β value to ensure training stability.

5.3. Double-Layer Grid Dome: 14-by-14 Grid

This example concerns a double-layered grid-dome model, in which a square grid space frame forms a curved surface. In this case, the multi-index domain and separated-domain strategies introduced in Section 4 were applied to train the neural network, and the results were analyzed accordingly. The geometry of the structure is illustrated in Figure 10, which represents a catenoid-shaped dome composed of a 14 × 14 double-layer grid. The design parameters are similar to those of the example discussed in Section 5.2, with a unit grid spacing of D = 5.0 m and a height difference between the upper and lower layers of H = (√2/2)D = 3.5355 m.
As shown in Figure 10b, the plan layout is consistent with that of a space frame, and the Z coordinates of the upper layer nodes are determined according to the following radial catenoid surface:
Z = a [ cosh(R/a) − 1 ]
where R = √((X − x₀)² + (Y − y₀)²) is the radial distance from the center (x₀, y₀). For a better understanding of the geometry, Table 7 lists the nodal coordinates of the apex and boundary nodes of the upper layer. The target model consists of 1568 members and 421 nodes, including four boundary nodes; the material is steel with a density of ρ = 7860 kg/m³ and a Young's modulus of E = 2.0 × 10¹¹ N/m². All members were assigned the same cross-sectional area (A = 10 cm²) and material properties. A uniformly distributed external load of 30 kN in the negative z-direction was applied to the upper-layer nodes.
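For illustration, the short script below reproduces the heights listed in Table 7. The shape parameter a = 70 m and the convention that the catenoid expression measures the drop from the apex are assumptions deduced from the tabulated coordinates rather than values stated in the text.

```python
import numpy as np

D, n = 5.0, 15                       # grid spacing, upper-layer nodes per side
coords = np.arange(n) * D            # 0, 5, ..., 70 m
X, Y = np.meshgrid(coords, coords, indexing="ij")
x0 = y0 = 35.0                       # plan center
R = np.sqrt((X - x0) ** 2 + (Y - y0) ** 2)

a = 70.0                             # assumed shape parameter
drop = a * (np.cosh(R / a) - 1.0)    # catenoid drop, measured from the apex
Z = drop.max() - drop                # corner nodes at Z = 0, apex at 18.2414 m

print(Z[7, 7], Z[7, 0], Z[0, 0])     # apex, mid-edge, and corner heights
```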
The IBNN model applied in this example is F_θt, which predicts the axial force t in each member. The model architectures include the single-index domain, the multi-index domain, the separated domain, and a hybrid of these configurations, as shown in Figure 3. The basic single-index domain structure, F_θt(i): ℕ⁺ → ℝ, consists of two hidden layers, each with 100 neurons. In the multi-index domain, each index corresponds to m components, making the function F_θt(i): ℕ⁺ → ℝ^m, which naturally results in a greater number of model parameters. In the separated-domain approach, the input data are partitioned into n subsets, producing n independent subnetworks F_θ^(n) that are trained separately. The hybrid model combines both the multi-index and separated-domain structures, and therefore has the largest number of neural network parameters.
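The structural difference between these configurations can be sketched as follows; how member indices are grouped into output components or partitions is an illustrative assumption, not the authors' code.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=100):
    """Two hidden layers of 100 neurons, as used in this example."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.Tanh(),
        nn.Linear(hidden, hidden), nn.Tanh(),
        nn.Linear(hidden, out_dim),
    )

class MultiIndexIBNN(nn.Module):
    """One network, one index in, m output components: F: N+ -> R^m."""
    def __init__(self, m, hidden=100):
        super().__init__()
        self.net = mlp(1, m, hidden)

    def forward(self, idx):
        return self.net(idx)

class SeparatedDomainIBNN(nn.Module):
    """n independent subnetworks, one per index partition."""
    def __init__(self, n_subdomains, hidden=100):
        super().__init__()
        self.subnets = nn.ModuleList(mlp(1, 1, hidden) for _ in range(n_subdomains))

    def forward(self, idx, domain_id):
        # Route each index through the subnetwork of its partition
        return self.subnets[domain_id](idx)
```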
In this example, the purpose of the model setup is not to maximize the prediction accuracy, but to observe and understand the conceptual and structural differences between the various IBNN configurations from a macroscopic perspective. Thus, this study aimed to examine the effects of the transition from single-index to multi-index and separated-domain architectures under relatively low-precision training conditions. All networks were trained using the Adam optimizer with a learning rate of η_ibnn = 0.001 and a maximum of 200,000 epochs. The loss functions considered were Case 1 (L_θ^β), Case 2 (L_θ^λ), and Case 3 (L_θ^{λ,β}), where the penalty parameter was set to β = 1.0 × 10⁻⁵.
Table 8 summarizes the analysis results for the dome example described above. In all three cases (Cases 1, 2, and 3), the single-index model failed to yield accurate predictions. By contrast, both the multi-index and separated-domain models achieved significantly better accuracy than the single-index model. The hybrid model, which combines both approaches, produced the best overall results. This outcome is expected, as the number of neural network parameters increases with model complexity. Across all cases, the trends in the L2 and R² values were consistent. Notably, the multi-index model, despite not being composed of fully independent subnetworks like the separated-domain model, showed similar training performance. Therefore, although both approaches offer high precision, using a single network with a multi-index structure may be more efficient than training multiple independent networks.
The variations in L 2 error and loss function related to the learning accuracy are shown in Figure 11 for Case 1, Figure 12 for Case 2, and Figure 13 for Case 3.
In Case 1, as illustrated in Figure 11, the single-index model exhibited poor training performance, whereas the multi-index model achieved a lower error as the number of epochs increased, and the separated-domain model ultimately attained better L2 accuracy and faster convergence. The hybrid model showed rapid early convergence and reached the best value; however, as training continued, the L2 improvement became marginal, indicating inefficiency in the learning process.
In Case 2, as shown in Figure 12, trends similar to those in Case 1 were observed, although the separated-domain model demonstrated better convergence behavior. The hybrid model performed strongly at the start, but its efficiency deteriorated over time. Case 3, shown in Figure 13, displays nearly identical behavior to Case 2: the separated-domain approach outperformed the multi-index model, and although the hybrid model was less efficient, it provided high precision in the early stages. Notably, because Cases 2 and 3 are based on Lagrangian terms, large fluctuations are observed in the initial loss function behavior (Figure 12b and Figure 13b) that do not appear in Case 1 (Figure 11b).
The best overall result in each case was achieved by the hybrid model using the augmented Lagrangian method in Case 3, where rapid convergence was observed. Aside from this early convergence, however, the separated-domain approach appears to be more efficient, as shown by the L2 error progression in Figure 13. This trend was also observed in Case 2, indicating that similar outcomes can be expected when using Lagrangian-based formulations. As demonstrated in this example, the augmented Lagrangian method yielded superior analytical performance.

5.4. Radial Flow-like Truss Dome

In this section, a numerical experiment was conducted on a radial flow-like truss dome structure comprising 1545 nodes and 5501 members to evaluate the applicability of the proposed Index-Based Neural Network (IBNN) framework to large-scale spatial truss systems and assess its scalability. This example is based on the roof dome structure studied by Shon et al. [71], and its geometry is illustrated in Figure 14 with an approximate span of 180 m in the longitudinal direction and 150 m in the transverse direction. The height difference between both ends of the longer span was approximately 3 m.
The structural system consists of a central compressive ring, an outer tensile ring, flow trusses connecting the two rings radially, and flow shells spanning between adjacent flow trusses. The compressive ring encircles the inner grid, and the two components collectively constitute a radial truss dome structure that expands outward from the inner region to the outer ring.
The material properties of the structure were as follows: density ρ = 7860 kg/m³, Young's modulus E = 2.0 × 10¹¹ N/m², and Poisson's ratio ν = 0.3. The applied loads included finishing loads of 0.35 kN/m², live loads of 0.6 kN/m², snow loads of 0.5 kN/m², and self-weight. These were converted into nodal loads applied to 1425 nodes, excluding the boundary nodes. The cross-sectional areas and nodal coordinates are provided in Shon et al. [71].
To verify the scalability and accuracy of the proposed IBNN framework, the radial flow-like truss dome was analyzed using the multi-index mapping structure shown in Figure 3b. The neural network consisted of two hidden layers with 300 neurons each, and training was performed for up to 200,000 epochs. The analysis was performed in a CPU-based environment using the Adam optimizer and the hyperbolic tangent (tanh) activation function. Because of the scale of the system, it is impractical to present a complete list of members and nodes; the overall model is illustrated in Figure 14, which includes a 3D perspective, a plan view, and the key structural features.
The analysis results are summarized in Table 9, which confirms that the overall prediction accuracy consistently improves as the number of multi-index mappings increases. Notably, the L2 error decreased sharply once the number of mappings reached 45, recording very low errors of 9.25714 × 10⁻⁴ for 75 mappings and 6.10446 × 10⁻⁵ for 95 mappings. Improvements in accuracy were also observed in the L∞ metric. The coefficient of determination R² approaches 1.0 once the number of mappings exceeds 25, after which the rate of improvement becomes marginal. This trend is also observed in Figure 15, which shows the changes in the L2 error and loss function values. As the number of mappings increases, the precision improves; however, beyond a certain point, early convergence is observed, along with a sharp rise in the loss function during the early stages of training.
In conclusion, the multi-mapping strategy proved effective at enhancing the expressiveness and generalization performance of the IBNN model, empirically demonstrating its capability for accurate response prediction in large-scale truss structures. Moreover, this example highlights the effectiveness of the proposed IBNN framework's input simplification and multi-index mapping strategy in terms of training stability and analytical precision for large-scale discrete systems.

6. Conclusions

In this study, an Index-Based Neural Network (IBNN) framework was proposed as a novel neural-network-based surrogate model for the static analysis of truss structures. Unlike conventional coordinate-based continuous-domain approaches, the IBNN performs neural network training in a discrete domain defined by member and nodal indices. This enables an effective reduction in input dimensionality, provides a more intuitive representation of topology and connectivity information, and ensures model generality, scalability, and applicability across diverse structural analysis problems.
The IBNN constructs independent neural networks for predicting the bar forces t and the nodal displacements d, and incorporates mechanics-informed constraints derived from the force and displacement methods into the loss function using the Augmented Lagrangian Method (ALM). Compared with the conventional penalty method, the ALM-based loss design demonstrated superior training stability and convergence speed. The combined use of penalty terms and Lagrange multipliers alleviated parameter sensitivity and enhanced convergence stability. This approach offers a novel mechanism for effectively embedding physical constraints within neural network training.
The numerical examples for both single- and multiple-domain cases strongly supported the effectiveness of the proposed framework. In large-scale problems such as the radial flow-like truss dome, the L2 error remained on the order of 10⁻⁵ even when the number of index mappings reached 95. The separated-domain configurations, in particular, exhibited significantly improved training speed and prediction accuracy. Furthermore, the analysis of training stability with respect to varying β values revealed that the Lagrangian-based cases diverged when β reached 0.01, while low β settings were numerically confirmed to be essential for stable convergence of the loss function. These results empirically demonstrate that the proposed framework maintains high accuracy and reliability even for large-scale discrete structural systems.
The main contributions of this study are summarized as follows:
  • The IBNN framework achieved input dimensionality reduction and flexible scalability for multi-domain problems by utilizing an index-based discrete domain instead of a coordinate-based continuous domain.
  • A mechanics-informed loss function based on the Augmented Lagrangian Method (ALM) ensured consistency and training stability in predicting both bar force and nodal displacement.
  • Parametric analysis revealed that the choice of β significantly affects stability, with lower β values leading to more stable convergence.
  • The applicability and scalability of the framework were validated through various large-scale case studies, including a double-layer grid dome (14-by-14 grid) and a radial flow-like truss dome.
In particular, the proposed IBNN offers a novel and distinctive approach by directly handling member and nodal indices within a discrete domain model, unlike conventional PINN or GNN frameworks that rely on coordinate-based continuum analysis or graph-topology dependencies. This enables a more efficient representation of structural regularity and topological information in truss analysis. As such, it may serve as a foundational model for neural network-based analysis of large-scale or complex structural systems in the future.
Future research will aim to (1) broaden the general applicability of the framework by extending and applying it to complex engineering problems such as geometric and material nonlinearity, dynamic behavior, damage estimation, and structural optimization, and (2) conduct performance comparisons and benchmarking analyses with state-of-the-art neural network models, including PINNs, GNNs, and QNNs, to comprehensively evaluate the strengths and limitations of the proposed framework.
This study presented a novel analytical paradigm that captures the discreteness and regularity of truss structures while simultaneously achieving the efficiency and flexibility of neural network-based surrogate modeling. Through this approach, the study made a significant contribution to expanding the potential of neural network-based analysis in the fields of structural engineering and computational mechanics.

Author Contributions

Conceptualization, S.S.; methodology, S.S. and H.H.; software, S.S.; validation, S.S. and H.H.; formal analysis, S.S. and H.H.; investigation, S.S. and H.H.; resources, S.S. and H.H.; data curation, S.S.; writing—original draft preparation, S.S.; writing—review and editing, H.H. and S.S.; visualization, H.H. and S.S.; supervision, S.S.; project administration, S.S.; funding acquisition, S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (RS-2023-00248809 and RS-2024-00413824), and by the NRF grant funded by the Ministry of Science and ICT (RS-2024-00352968).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Nourian, N.; El-Badry, M.; Jamshidi, M. Design Optimization of Truss Structures Using a Graph Neural Network-Based Surrogate Model. Algorithms 2023, 16, 380.
  2. Pellegrino, S.; Calladine, C. Matrix analysis of statically and kinematically indeterminate frameworks. Int. J. Solids Struct. 1986, 22, 409–428.
  3. Patnaik, S.; Berke, L.; Gallagher, R. Integrated force method versus displacement method for finite element analysis. Comput. Struct. 1991, 38, 377–407.
  4. Ohkubo, S.; Watada, Y.; Toshio, F. Nonlinear analysis of truss by energy minimization. Comput. Struct. 1987, 27, 129–145.
  5. Toklu, Y. Nonlinear analysis of trusses through energy minimization. Comput. Struct. 2004, 82, 1581–1589.
  6. Chi Tran, H.; Lee, J. Force methods for trusses with elastic boundary conditions. Int. J. Mech. Sci. 2013, 66, 202–213.
  7. Luo, Y.; Lu, J. Geometrically non-linear force method for assemblies with infinitesimal mechanisms. Comput. Struct. 2006, 84, 2194–2199.
  8. Wang, Y.; Senatore, G. Extended integrated force method for the analysis of prestress-stable statically and kinematically indeterminate structures. Int. J. Solids Struct. 2020, 202, 798–815.
  9. Shon, S.; Lee, S.; Ha, J.; Cho, C. Semi-Analytic Solution and Stability of a Space Truss Using a High-Order Taylor Series Method. Materials 2015, 8, 2400–2414.
  10. Yang, Q.; Li, H.; Zhang, L.; Guo, K.; Li, K. Nonlinear flutter in a wind-excited double-deck truss girder bridge: Experimental investigation and modeling approach. Nonlinear Dyn. 2025, 113, 6427–6445.
  11. Deng, H.; Kwan, A. Unified classification of stability of pin-jointed bar assemblies. Int. J. Solids Struct. 2005, 42, 4393–4413.
  12. Kaveh, A.; Zolghadr, A. Topology optimization of trusses considering static and dynamic constraints using the CSS. Appl. Soft Comput. 2013, 13, 2727–2734.
  13. Lee, D.; Shon, S.; Lee, S.; Ha, J. Size and Topology Optimization of Truss Structures Using Quantum-Based HS Algorithm. Buildings 2023, 13, 1436.
  14. Saeed, N.M.; Manguri, A.A.; Szczepanski, M.; Jankowski, R.; Haydar, B.A. Static Shape and Stress Control of Trusses with Optimum Time, Actuators and Actuation. Int. J. Civ. Eng. 2023, 21, 379–390.
  15. Saeed, N.M.; Kwan, A.S.K. Simultaneous Displacement and Internal Force Prescription in Shape Control of Pin-Jointed Assemblies. AIAA J. 2016, 54, 2499–2506.
  16. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems; Pereira, F., Burges, C., Bottou, L., Weinberger, K., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2012; Volume 25.
  17. Bianco, M.J.; Gerstoft, P.; Traer, J.; Ozanich, E.; Roch, M.A.; Gannot, S.; Deledalle, C.A. Machine learning in acoustics: Theory and applications. J. Acoust. Soc. Am. 2019, 146, 3590–3628.
  18. Lee, S.Y.; Chang, J.; Lee, S. Deep Learning-Enabled High-Resolution and Fast Sound Source Localization in Spherical Microphone Array System. IEEE Trans. Instrum. Meas. 2022, 71, 1–12.
  19. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444.
  20. Montáns, F.J.; Chinesta, F.; Gómez-Bombarelli, R.; Kutz, J.N. Data-driven modeling and learning in science and engineering. Comptes Rendus Mec. 2019, 347, 845–855.
  21. Lee, S.; Kim, T.; Lieu, Q.X.; Vo, T.P.; Lee, J. A novel data-driven analysis for sequentially formulated plastic hinges of steel frames. Comput. Struct. 2023, 281, 107031.
  22. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707.
  23. Yuan, F.G.; Zargar, S.A.; Chen, Q.; Wang, S. Machine learning for structural health monitoring: Challenges and opportunities. In Proceedings of the Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems 2020, Online, 27 April–8 May 2020; Volume 11379, p. 1137903.
  24. Blechschmidt, J.; Ernst, O.G. Three ways to solve partial differential equations with neural networks—A review. GAMM Mitteilungen 2021, 44, e202100006.
  25. Karniadakis, G.E.; Kevrekidis, I.G.; Lu, L.; Perdikaris, P.; Wang, S.; Yang, L. Physics-informed machine learning. Nat. Rev. Phys. 2021, 3, 422–440.
  26. Hao, Z.; Liu, S.; Zhang, Y.; Ying, C.; Feng, Y.; Su, H.; Zhu, J. Physics-Informed Machine Learning: A Survey on Problems, Methods and Applications. arXiv 2022, arXiv:2211.08064.
  27. Xu, R.; Zhang, D.; Rong, M.; Wang, N. Weak form theory-guided neural network (TgNN-wf) for deep learning of subsurface single- and two-phase flow. J. Comput. Phys. 2021, 436, 110318.
  28. Samaniego, E.; Anitescu, C.; Goswami, S.; Nguyen-Thanh, V.; Guo, H.; Hamdia, K.; Zhuang, X.; Rabczuk, T. An energy approach to the solution of partial differential equations in computational mechanics via machine learning: Concepts, implementation and applications. Comput. Methods Appl. Mech. Eng. 2020, 362, 112790.
  29. Li, W.; Bazant, M.Z.; Zhu, J. A physics-guided neural network framework for elastic plates: Comparison of governing equations-based and energy-based approaches. Comput. Methods Appl. Mech. Eng. 2021, 383, 113933.
  30. Jagtap, A.D.; Kharazmi, E.; Karniadakis, G.E. Conservative physics-informed neural networks on discrete domains for conservation laws: Applications to forward and inverse problems. Comput. Methods Appl. Mech. Eng. 2020, 365, 113028.
  31. Luong, K.A.; Le-Duc, T.; Lee, J. Automatically imposing boundary conditions for boundary value problems by unified physics-informed neural network. Eng. Comput. 2024, 40, 1717–1739.
  32. Cho, J.; Nam, S.; Yang, H.; Yun, S.B.; Hong, Y.; Park, E. Separable Physics-Informed Neural Networks. arXiv 2023, arXiv:2306.15969.
  33. Es'kin, V.A.; Davydov, D.V.; Gur'eva, J.V.; Malkhanov, A.O.; Smorkalov, M.E. Separable Physics-Informed Neural Networks for the solution of elasticity problems. arXiv 2024, arXiv:2401.13486.
  34. Al-Adly, A.I.F.; Kripakaran, P. Physics-informed neural networks for structural health monitoring: A case study for Kirchhoff–Love plates. Data-Centric Eng. 2024, 5, e6.
  35. Kalimullah, N.M.; Shelke, A.; Habib, A. A probabilistic framework for source localization in anisotropic composite using transfer learning based multi-fidelity physics informed neural network (mfPINN). Mech. Syst. Signal Process. 2023, 197, 110360.
  36. Liu, T.; Meidani, H. Physics-Informed Neural Networks for System Identification of Structural Systems with a Multiphysics Damping Model. J. Eng. Mech. 2023, 149, 04023079.
  37. Yang, L.H.; Luo, X.L.; Yang, Z.B.; Nan, C.F.; Chen, X.F.; Sun, Y. FE reduced-order model-informed neural operator for structural dynamic response prediction. Neural Netw. 2025, 188, 107437.
  38. Cuomo, S.; Di Cola, V.S.; Giampaolo, F.; Rozza, G.; Raissi, M.; Piccialli, F. Scientific Machine Learning Through Physics-Informed Neural Networks: Where we are and What's Next. J. Sci. Comput. 2022, 92, 88.
  39. Khalid, S.; Yazdani, M.H.; Azad, M.M.; Elahi, M.U.; Raouf, I.; Kim, H.S. Advancements in Physics-Informed Neural Networks for Laminated Composites: A Comprehensive Review. Mathematics 2025, 13, 17.
  40. Katsikis, D.; Muradova, A.D.; Stavroulakis, G.E. A Gentle Introduction to Physics-Informed Neural Networks, with Applications in Static Rod and Beam Problems. J. Adv. Appl. & Comput. Math. 2022, 9, 103–128.
  41. Bazmara, M.; Silani, M.; Mianroodi, M.; Sheibanian, M. Physics-informed neural networks for nonlinear bending of 3D functionally graded beam. Structures 2023, 49, 152–162.
  42. Hong, W.K.; Pham, T.D. An AI-based auto-design for optimizing RC frames using the ANN-based Hong–Lagrange algorithm. J. Asian Archit. Build. Eng. 2023, 22, 2876–2888.
  43. Le-Duc, T.; Nguyen-Xuan, H.; Lee, J. A finite-element-informed neural network for parametric simulation in structural mechanics. Finite Elem. Anal. Des. 2023, 217, 103904.
  44. Hajela, P.; Berke, L. Neural network based decomposition in optimal structural synthesis. Comput. Syst. Eng. 1991, 2, 473–481.
  45. Hajela, P.; Berke, L. Neurobiological computational models in structural analysis and design. Comput. Struct. 1991, 41, 657–667.
  46. Berke, L.; Hajela, P. Applications of artificial neural nets in structural mechanics. Struct. Optim. 1992, 4, 90–98.
  47. Papadrakakis, M.; Lagaros, N.D.; Tsompanakis, Y. Optimization of Large-Scale 3-D Trusses Using Evolution Strategies and Neural Network. Int. J. Space Struct. 1999, 14, 211–223.
  48. Nguyen, T.H.; Vu, A.T. Using neural networks as surrogate models in differential evolution optimization of truss structures. In Proceedings of the Computational Collective Intelligence: 12th International Conference, ICCCI 2020, Da Nang, Vietnam, 30 November–3 December 2020; Proceedings 12; Springer: Berlin/Heidelberg, Germany, 2020; pp. 152–163.
  49. Mai, H.T.; Kang, J.; Lee, J. A machine learning-based surrogate model for optimization of truss structures with geometrically nonlinear behavior. Finite Elem. Anal. Des. 2021, 196, 103572.
  50. Nguyen, T.H.; Vu, A.T. Speeding up Composite Differential Evolution for structural optimization using neural networks. J. Inf. Telecommun. 2022, 6, 101–120.
  51. Liu, Y.; Lu, N.; Noori, M.; Yin, X. System reliability-based optimisation for truss structures using genetic algorithm and neural network. Int. J. Reliab. Saf. 2014, 8, 51–69.
  52. Tashakori, A.; Adeli, H. Optimum design of cold-formed steel space structures using neural dynamics model. J. Constr. Steel Res. 2002, 58, 1545–1566.
  53. Moghadas, R.K.; Choong, K.K.; Mohd, S.B. Prediction of Optimal Design and Deflection of Space Structures Using Neural Networks. Math. Probl. Eng. 2012, 2012, 712974.
  54. Nguyen, T.H.; Vu, A.T. Prediction of Optimal Cross-Sectional Areas of Truss Structures Using Artificial Neural Networks. In Proceedings of the CIGOS 2021, Emerging Technologies and Applications for Green Infrastructure: Proceedings of the 6th International Conference on Geotechnics, Civil Engineering and Structures, Ha Long, Vietnam, 28–29 October 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 1897–1905.
  55. Mai, H.T.; Lieu, Q.X.; Kang, J.; Lee, J. A novel deep unsupervised learning-based framework for optimization of truss structures. Eng. Comput. 2023, 39, 2585–2608.
  56. Mai, H.T.; Lee, S.; Kim, D.; Lee, J.; Kang, J.; Lee, J. Optimum design of nonlinear structures via deep neural network-based parameterization framework. Eur. J. Mech.-A/Solids 2023, 98, 104869.
  57. Mai, H.T.; Mai, D.D.; Kang, J.; Lee, J.; Lee, J. Physics-informed neural energy-force network: A unified solver-free numerical simulation for structural optimization. Eng. Comput. 2024, 40, 147–170.
  58. Alam, J.; Berke, L. Application of Artificial Neural Networks in Nonlinear Analysis of Trusses; NASA Technical Memorandum; Lewis Research Center: Cleveland, OH, USA, 1991.
  59. Kaveh, A.; Dehkordi, M. Neural Networks for the Analysis and Design of Domes. Int. J. Space Struct. 2003, 18, 181–193.
  60. Nguyen, T.H.; Vu, A.T. Evaluating structural safety of trusses using Machine Learning. Frat. Integrità Strutt. 2021, 15, 308–318.
  61. Lieu, Q.X.; Nguyen, K.T.; Dang, K.D.; Lee, S.; Kang, J.; Lee, J. An adaptive surrogate model to structural reliability analysis using deep neural network. Expert Syst. Appl. 2022, 189, 116104.
  62. Mai, H.T.; Lieu, Q.X.; Kang, J.; Lee, J. A robust unsupervised neural network framework for geometrically nonlinear analysis of inelastic truss structures. Appl. Math. Model. 2022, 107, 332–352.
  63. Mai, H.T.; Truong, T.T.; Kang, J.; Mai, D.D.; Lee, J. A robust physics-informed neural network approach for predicting structural instability. Finite Elem. Anal. Des. 2023, 216, 103893.
  64. Lee, S.; Park, S.; Kim, T.; Lieu, Q.X.; Lee, J. Damage quantification in truss structures by limited sensor-based surrogate model. Appl. Acoust. 2021, 172, 107547.
  65. Lieu, Q.X. A deep neural network-assisted metamodel for damage detection of trusses using incomplete time-series acceleration. Expert Syst. Appl. 2023, 233, 120967.
  66. Mai, H.T.; Lee, S.; Kang, J.; Lee, J. A damage-informed neural network framework for structural damage identification. Comput. Struct. 2024, 292, 107232.
  67. Pellegrino, S. Structural computations with the singular value decomposition of the equilibrium matrix. Int. J. Solids Struct. 1993, 30, 3025–3035.
  68. Ziegler, F. Computational aspects of structural shape control. Comput. Struct. 2005, 83, 1191–1204.
  69. Pellegrino, S. Analysis of prestressed mechanisms. Int. J. Solids Struct. 1990, 26, 1329–1350.
  70. Hwang, H.J.; Son, H. Lagrangian dual framework for conservative neural network solutions of kinetic equations. Kinet. Relat. Model. 2022, 15, 551–568.
  71. Shon, S.; Kim, S.D.; Kang, M.M. A Study of Unstable Phenomenon of Flow Truss Dome Structure with Asymmetric Load Modes. J. Korean Assoc. Spat. Struct. 2002, 2, 61–76.
Figure 1. Configurations of the 6-bar plane truss: (a) plan view in the xy plane, (b) index mapping in the ij plane.
Figure 2. Conceptual diagram illustrating the transformation of spatial domain data into the index domain in the proposed IBNN framework.
Figure 3. Conceptual architecture of the proposed IBNN framework: (a) single-index domain, (b) multi-domain, (c) separated domain, and (d) detailed schematic of the index-based input–output structure and the computation of mechanics-informed constraints via Lagrangian multipliers under the force method formulation.
Figure 4. Configuration of the 25-bar space truss: 3D view.
Figure 5. Error and loss comparison for Cases 1, 2 and 3 in the 25-bar space truss (t_i = F_θt(i)): (a) L2 (MSE), (b) total loss function L_t, (c) L2 (MSE), Cases 2–3, β = 0.01, (d) total loss L_t, Cases 2–3, β = 0.01. The y-axis of figures (a,b,d) is in a log scale.
Figure 6. Error and loss comparison for Cases 1, 2 and 3 in the 25-bar space truss (d_i = F_θd(i)): (a) L2 (MSE), (b) total loss function L_d, (c) L2 (MSE), Cases 2–3, β = 0.01, (d) total loss L_d, Cases 2–3, β = 0.01. The y-axis of figures (a,b,d) is in a log scale.
Figure 7. Configurations of the 3-by-3 space frame structures: (a) plan view, (b) perspective view.
Figure 8. Error and loss comparison for Cases 1, 2 and 3 in the 3-by-3 square grid space frame (t_i = F_θt(i)): (a) L2 (MSE), (b) total loss function L_t, (c) L2 (MSE), Cases 2–3, β = 0.01, (d) total loss L_t, Cases 2–3, β = 0.01. The y-axis of figures (a,b,d) is in a log scale.
Figure 9. Error and loss comparison for Cases 1, 2 and 3 in the 3-by-3 square grid space frame (d_i = F_θd(i)): (a) L2 (MSE), (b) total loss function L_d, (c) L2 (MSE), Cases 2–3, β = 0.01, (d) total loss L_d, Cases 2–3, β = 0.01. The y-axis of figures (a,b,d) is in a log scale.
Figure 10. Configurations of the double-layered grid dome (14-by-14): (a) plan view, (b) perspective view.
Figure 11. Error and loss comparison by index mapping type (double-layered grid dome (14-by-14), t_i = F_θt(i), β = 0.00001, penalty method): (a) L2 (MSE), (b) total loss function L_t. The y-axis of the figures is in a log scale.
Figure 12. Error and loss comparison by index mapping type (double-layered grid dome (14-by-14), t_i = F_θt(i), β = 0.00001, Lagrangian method): (a) L2 (MSE), (b) total loss function L_t. The y-axis of the figures is in a log scale.
Figure 13. Error and loss comparison by index mapping type (double-layered grid dome (14-by-14), t_i = F_θt(i), β = 0.00001, augmented Lagrangian method): (a) L2 (MSE), (b) total loss function L_t. The y-axis of the figures is in a log scale.
Figure 14. Configurations of the radial flow-like truss dome (see [71]): (a) perspective view, (b) plan view and model components.
Figure 15. Error and loss comparison of multi-domain indexing (radial flow-like truss dome, d_i = F_θd(i)): (a) L2 (MSE), (b) total loss function L_d. The y-axis of the figures is in a log scale.
Table 1. Comparison of total potential energy for the 6-bar plane truss obtained using coordinate-based and index-based input representations.
Type of Input Domain | Ref. [62] | Coordinate (X, Y) | Index (i, j)
Potential energy Π_p (N·m) | −1061.228 | −1061.4184543 | −1061.4184543
Table 2. Configuration of loss functions for IBNN training.
Case | Loss Function
Case 1 (Quadratic Penalty) | L_t^β = L_r + L_q^β
Case 2 (Lagrangian) | L_t^λ = L_r + L_L^λ
Case 3 (Augmented Lagrangian) | L_t^{λ,β} = L_r + L_L^λ + L_q^β
Table 3. Comparative analysis of IBNN performance in Cases 1, 2 and 3 by β = 0.0001∼0.01 (25-bar space truss, t_i = F_θt(i)).
β | L | L2 (MSE) | R_L2 (RMSE) | L∞ | R²
Case 1: L_t^β
0.0001 | 2.31636 × 10⁻⁵ | 5.79009 × 10⁻⁴ | 2.40626 × 10⁻² | 1.52660 × 10⁻² | 9.99520 × 10⁻¹
0.001 | 2.02718 × 10⁻⁵ | 5.06028 × 10⁻⁴ | 2.24951 × 10⁻² | 1.41377 × 10⁻² | 9.99580 × 10⁻¹
0.01 | 2.38054 × 10⁻⁵ | 5.86750 × 10⁻⁴ | 2.42229 × 10⁻² | 1.53916 × 10⁻² | 9.99514 × 10⁻¹
Case 2: L_t^λ
0.0001 | 2.49940 × 10⁻⁵ | 4.99587 × 10⁻⁴ | 2.23514 × 10⁻² | 1.13594 × 10⁻² | 9.99586 × 10⁻¹
0.001 | 2.31281 × 10⁻⁵ | 4.92399 × 10⁻⁴ | 2.21901 × 10⁻² | 1.23214 × 10⁻² | 9.99592 × 10⁻¹
0.01 | 7.43076 × 10⁻¹ | 4.64649 × 10⁻¹ | 6.81652 × 10⁻¹ | 4.42321 × 10⁻¹ | 6.14778 × 10⁻¹
Case 3: L_t^{λ,β}
0.0001 | 9.55876 × 10⁻⁸ | 2.22486 × 10⁻⁶ | 1.49160 × 10⁻³ | 1.03544 × 10⁻³ | 9.99998 × 10⁻¹
0.001 | 2.38382 × 10⁻⁵ | 4.92519 × 10⁻⁴ | 2.21928 × 10⁻² | 1.19409 × 10⁻² | 9.99592 × 10⁻¹
0.01 | 3.68786 × 10⁻¹ | 6.50534 × 10⁻¹ | 8.06557 × 10⁻¹ | 4.33099 × 10⁻¹ | 4.60668 × 10⁻¹
Table 4. Comparative analysis of IBNN performance in Cases 1, 2 and 3 by β = 0.0001∼0.01 (25-bar space truss, d_i = F_θd(i)).
β | L | L2 (MSE) | R_L2 (RMSE) | L∞ | R²
Case 1: L_t^β
0.0001 | 5.84719 × 10⁻¹¹ | 1.05227 × 10⁻⁹ | 3.24388 × 10⁻⁵ | 2.47268 × 10⁻⁵ | 1.00000 × 10⁰
0.001 | 7.60325 × 10⁻¹² | 1.36563 × 10⁻¹⁰ | 1.16860 × 10⁻⁵ | 9.07090 × 10⁻⁶ | 1.00000 × 10⁰
0.01 | 4.18516 × 10⁻¹¹ | 7.35574 × 10⁻¹⁰ | 2.71215 × 10⁻⁵ | 1.79663 × 10⁻⁵ | 1.00000 × 10⁰
Case 2: L_t^λ
0.0001 | 1.33499 × 10⁻¹⁰ | 4.55151 × 10⁻⁹ | 6.74649 × 10⁻⁵ | 3.94694 × 10⁻⁵ | 1.00000 × 10⁰
0.001 | 9.10703 × 10⁻¹¹ | 1.23038 × 10⁻⁹ | 3.50767 × 10⁻⁵ | 1.56834 × 10⁻⁵ | 1.00000 × 10⁰
0.01 | 8.20824 × 10⁻⁴ | 1.47570 × 10⁻² | 1.21478 × 10⁻¹ | 9.29927 × 10⁻² | 9.96223 × 10⁻¹
Case 3: L_t^{λ,β}
0.0001 | 1.48308 × 10⁻¹¹ | 3.89398 × 10⁻⁹ | 6.24017 × 10⁻⁵ | 3.46187 × 10⁻⁵ | 1.00000 × 10⁰
0.001 | 1.26217 × 10⁻¹⁰ | 1.40424 × 10⁻⁹ | 3.74731 × 10⁻⁵ | 1.66713 × 10⁻⁵ | 1.00000 × 10⁰
0.01 | 1.64014 × 10⁻³ | 3.58583 × 10⁻² | 1.89363 × 10⁻¹ | 1.07014 × 10⁻¹ | 9.90821 × 10⁻¹
Table 5. Comparative analysis of IBNN performance in Cases 1, 2 and 3 by β = 0.0001∼0.01 (3-by-3 square grid space frame, t_i = F_θt(i)).
β | L | L2 (MSE) | R_L2 (RMSE) | L∞ | R²
Case 1: L_t^β
0.0001 | 2.79503 × 10⁻³ | 2.01194 × 10⁻¹ | 4.48547 × 10⁻¹ | 2.28187 × 10⁻¹ | 9.75396 × 10⁻¹
0.001 | 2.78160 × 10⁻³ | 1.99798 × 10⁻¹ | 4.46988 × 10⁻¹ | 2.28508 × 10⁻¹ | 9.75567 × 10⁻¹
0.01 | 2.81575 × 10⁻³ | 1.97994 × 10⁻¹ | 4.44965 × 10⁻¹ | 2.25048 × 10⁻¹ | 9.75788 × 10⁻¹
Case 2: L_t^λ
0.0001 | 5.13860 × 10⁻⁵ | 3.65850 × 10⁻³ | 6.04855 × 10⁻² | 2.22224 × 10⁻² | 9.99553 × 10⁻¹
0.001 | 1.24127 × 10⁻³ | 8.67216 × 10⁻² | 2.94485 × 10⁻¹ | 1.36646 × 10⁻¹ | 9.89395 × 10⁻¹
0.01 | 1.97962 × 10⁻² | 7.50997 × 10⁰ | 2.74043 × 10⁰ | 1.01318 × 10⁰ | 8.16146 × 10⁻²
Case 3: L_t^{λ,β}
0.0001 | 8.56828 × 10⁻⁵ | 1.00731 × 10⁻² | 1.00365 × 10⁻¹ | 7.13114 × 10⁻² | 9.98768 × 10⁻¹
0.001 | 2.06440 × 10⁻⁵ | 1.45082 × 10⁻³ | 3.80896 × 10⁻² | 2.65121 × 10⁻² | 9.99823 × 10⁻¹
0.01 | 2.75332 × 10⁻² | 7.54816 × 10⁰ | 2.74739 × 10⁰ | 9.90301 × 10⁻¹ | 7.69451 × 10⁻²
Table 6. Comparative analysis of IBNN performance in Cases 1, 2 and 3 by β = 0.0001∼0.01 (3-by-3 square grid space frame, d_i = F_θd(i)).
β | L | L2 (MSE) | R_L2 (RMSE) | L∞ | R²
Case 1: L_t^β
0.0001 | 8.31060 × 10⁻² | 5.23534 × 10⁰ | 2.28809 × 10⁰ | 7.14368 × 10⁻¹ | 5.52065 × 10⁻¹
0.001 | 3.48068 × 10⁻² | 2.18930 × 10⁰ | 1.47963 × 10⁰ | 7.43621 × 10⁻¹ | 8.12684 × 10⁻¹
0.01 | 3.87057 × 10⁻⁹ | 2.39956 × 10⁻⁷ | 4.89853 × 10⁻⁴ | 2.45839 × 10⁻⁴ | 1.00000 × 10⁰
Case 2: L_t^λ
0.0001 | 8.90273 × 10⁻¹¹ | 1.34995 × 10⁻⁸ | 1.16188 × 10⁻⁴ | 5.56834 × 10⁻⁵ | 1.00000 × 10⁰
0.001 | 3.05118 × 10⁻² | 1.82341 × 10⁰ | 1.35034 × 10⁰ | 9.21857 × 10⁻¹ | 8.43989 × 10⁻¹
0.01 | 9.91229 × 10⁰ | 6.35640 × 10⁰ | 2.52119 × 10⁰ | 7.38646 × 10⁻¹ | 4.56147 × 10⁻¹
Case 3: L_t^{λ,β}
0.0001 | 2.20790 × 10⁻⁹ | 1.65631 × 10⁻⁷ | 4.06978 × 10⁻⁴ | 2.00056 × 10⁻⁴ | 1.00000 × 10⁰
0.001 | 2.28912 × 10⁻⁴ | 1.36984 × 10⁻² | 1.17040 × 10⁻¹ | 8.39807 × 10⁻² | 9.98828 × 10⁻¹
0.01 | 1.14573 × 10⁰ | 8.13882 × 10⁰ | 2.85286 × 10⁰ | 7.59678 × 10⁻¹ | 3.03644 × 10⁻¹
Table 7. Nodal coordinates in the upper layer: the eight edge nodes and the top node (unit: m).
Node No. | x | y | z | Location
1 | 0.0 | 0.0 | 0.0 | lower-left
8 | 35.0 | 0.0 | 9.3076 | lower-middle
15 | 70.0 | 0.0 | 0.0 | lower-right
106 | 0.0 | 35.0 | 9.3076 | middle-left
120 | 70.0 | 35.0 | 9.3076 | middle-right
113 | 35.0 | 35.0 | 18.2414 | top node
211 | 0.0 | 70.0 | 0.0 | upper-left
218 | 35.0 | 70.0 | 9.3076 | upper-middle
225 | 70.0 | 70.0 | 0.0 | upper-right
Table 8. Comparative analysis of IBNN performance in accordance with index mapping type, with β = 1.0 × 10⁻⁵ (double-layered grid dome (14-by-14), t_i = F_θt(i)).
Index Type | L | L2 (MSE) | R_L2 (RMSE) | L∞ | R²
Case 1: L_t^β
single | 5.05099 × 10⁻³ | 7.91991 × 10⁰ | 2.81423 × 10⁰ | 5.10985 × 10⁻¹ | 6.11438 × 10⁻¹
multi-index | 2.12168 × 10⁻⁴ | 3.32675 × 10⁻¹ | 5.76780 × 10⁻¹ | 1.17295 × 10⁻¹ | 9.83679 × 10⁻¹
separate | 2.10365 × 10⁻⁴ | 3.29848 × 10⁻¹ | 5.74324 × 10⁻¹ | 1.15007 × 10⁻¹ | 9.83817 × 10⁻¹
hybrid | 1.13642 × 10⁻⁴ | 1.78187 × 10⁻¹ | 4.22122 × 10⁻¹ | 1.13074 × 10⁻¹ | 9.91258 × 10⁻¹
Case 2: L_t^λ
single | 1.58201 × 10⁻² | 1.75554 × 10¹ | 4.18991 × 10⁰ | 9.30168 × 10⁻¹ | 1.38709 × 10⁻¹
multi-index | 3.90745 × 10⁻⁴ | 5.05985 × 10⁻¹ | 7.11326 × 10⁻¹ | 1.18577 × 10⁻¹ | 9.75176 × 10⁻¹
separate | 8.23573 × 10⁻⁵ | 1.35382 × 10⁻¹ | 3.67943 × 10⁻¹ | 7.11136 × 10⁻² | 9.93358 × 10⁻¹
hybrid | 6.01567 × 10⁻⁵ | 7.82180 × 10⁻² | 2.79675 × 10⁻¹ | 7.01166 × 10⁻² | 9.96163 × 10⁻¹
Case 3: L_t^{λ,β}
single | 1.30849 × 10⁻² | 1.61372 × 10¹ | 4.01711 × 10⁰ | 9.62240 × 10⁻¹ | 2.08287 × 10⁻¹
multi-index | 4.39831 × 10⁻⁴ | 5.57639 × 10⁻¹ | 7.46753 × 10⁻¹ | 1.70493 × 10⁻¹ | 9.72641 × 10⁻¹
separate | 1.93004 × 10⁻⁵ | 3.82613 × 10⁻² | 1.95605 × 10⁻¹ | 4.12174 × 10⁻² | 9.98123 × 10⁻¹
hybrid | 1.59289 × 10⁻⁵ | 1.87640 × 10⁻² | 1.36982 × 10⁻¹ | 3.84808 × 10⁻² | 9.99079 × 10⁻¹
Table 9. Comparative analysis of IBNN performance in Case 3 with β = 1.0 × 10⁻⁵ (radial flow-like truss dome, d_i = F_θd(i)).
No. of Index Mappings | L | L2 (MSE) | R_L2 (RMSE) | L∞ | R²
Case 3: L_t^{λ,β}
1 | 6.55344 × 10⁻² | 2.56843 × 10² | 1.60263 × 10¹ | 8.29171 × 10⁻¹ | 1.13706 × 10⁻¹
3 | 2.72644 × 10⁻² | 5.96450 × 10¹ | 7.72302 × 10⁰ | 5.10494 × 10⁻¹ | 7.94182 × 10⁻¹
5 | 6.61635 × 10⁻² | 2.57076 × 10² | 1.60336 × 10¹ | 9.26819 × 10⁻¹ | 1.12902 × 10⁻¹
9 | 1.04632 × 10⁻² | 3.25865 × 10¹ | 5.70846 × 10⁰ | 4.64187 × 10⁻¹ | 8.87553 × 10⁻¹
19 | 1.96104 × 10⁻² | 8.31951 × 10¹ | 9.12113 × 10⁰ | 8.73170 × 10⁻¹ | 7.12917 × 10⁻¹
25 | 6.33708 × 10⁻⁴ | 2.70466 × 10⁰ | 1.64458 × 10⁰ | 3.42814 × 10⁻¹ | 9.90667 × 10⁻¹
45 | 4.55446 × 10⁻⁶ | 2.11012 × 10⁻² | 1.45263 × 10⁻¹ | 7.28820 × 10⁻³ | 9.99927 × 10⁻¹
57 | 5.13090 × 10⁻⁶ | 2.31063 × 10⁻² | 1.52008 × 10⁻¹ | 8.00845 × 10⁻³ | 9.99920 × 10⁻¹
75 | 1.93386 × 10⁻⁷ | 9.25714 × 10⁻⁴ | 3.04256 × 10⁻² | 8.42162 × 10⁻³ | 9.99997 × 10⁻¹
95 | 7.85187 × 10⁻⁹ | 6.10446 × 10⁻⁵ | 7.81310 × 10⁻³ | 6.43504 × 10⁻⁴ | 1.00000 × 10⁰