Article

Domain-Separated Quantum Neural Network for Truss Structural Analysis with Mechanics-Informed Constraints

School of Industrial Design & Architectural Engineering, Korea University of Technology & Education, 1600 Chungjeol-ro, Byeongcheon-myeon, Cheonan 31253, Republic of Korea
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(6), 407; https://doi.org/10.3390/biomimetics10060407
Submission received: 9 April 2025 / Revised: 3 May 2025 / Accepted: 8 May 2025 / Published: 16 June 2025
(This article belongs to the Special Issue Nature-Inspired Metaheuristic Optimization Algorithms 2025)

Abstract

This study proposes an index-based quantum neural network (QNN) model, built upon a variational quantum circuit (VQC), as a surrogate framework for the static analysis of truss structures. Unlike coordinate-based models, the proposed QNN uses discrete member and node indices as inputs, and it adopts a separate-domain strategy that partitions the structure for parallel training. This architecture reflects the way nature organizes and optimizes complex systems, thereby enhancing both flexibility and scalability. Independent quantum circuits are assigned to each separate domain, and a mechanics-informed loss function based on the force method is formulated within a Lagrangian dual framework to embed physical constraints directly into the training process. As a result, the model achieves high prediction accuracy and fast convergence, even under complex structural conditions with relatively few parameters. Numerical experiments on 2D and 3D truss structures show that the QNN reduces the number of parameters by up to 64% compared to conventional neural networks, while achieving higher accuracy. Even within the same QNN architecture, the separate-domain approach outperforms the single-domain model with a 6.25% reduction in parameters. The proposed index-based QNN model has demonstrated practical applicability for structural analysis and shows strong potential as a quantum-based numerical analysis tool for future applications in building structure optimization and broader engineering domains.

1. Introduction

Truss structures are widely used across various fields, such as architecture, civil engineering, mechanical engineering, and aerospace engineering. These structural systems are composed of straight members that are connected at joints and offer advantages, such as a high strength-to-weight ratio, efficient use of materials, and stable load distribution. In particular, each truss member primarily carries axial force with minimal influence from moments. Although their geometry is discrete, space trusses exhibit mechanical behavior similar to that of continua, making their analysis and prediction of physical behavior critically important in design, damage assessment, and maintenance [1,2]. In general, the analysis of truss structures relies on methods such as the finite element method (FEM) or the force method [2]. While these methods offer high accuracy, they incur significant computational cost when applied to large-scale systems and face limitations in capturing complex nonlinear behavior. Tasks such as dynamic analysis [2], structural stability evaluation [3], and various forms of optimization (e.g., topology, shape, and weight) [1,4,5], as well as shape control [6,7], require appropriate analysis techniques. Consequently, there is growing demand for efficient and reliable advanced analysis methods and surrogate modeling approaches. Recently, the advancement of artificial neural networks (ANNs) and the development of diverse training strategies have led to their active application in structural mechanics. ANNs have been used as surrogate models for nonlinear analysis, stability evaluation, and truss optimization, and they have also served as experimental platforms for validating novel neural network architectures. Due to their powerful ability to approximate complex functions and solve high-dimensional nonlinear problems, neural networks have been widely adopted in structural analysis [8,9,10,11]. 
Physics-Informed Neural Networks (PINNs), which integrate physical laws directly into the training process, have emerged as a promising approach for effectively solving structural analysis problems [12]. However, conventional neural network models often require large amounts of training data and substantial computational resources to achieve optimal generalization performance.
The implementation of quantum computing technologies has been proposed as a potential alternative to overcome these limitations. In particular, with the advent of the Noisy Intermediate-Scale Quantum (NISQ) era, various quantum algorithms have been actively investigated to transcend the boundaries of classical approaches. Among them, variational quantum algorithms (VQAs) have attracted significant attention as a methodology that solves optimization problems using quantum circuits, offering the possibility of greater expressivity with fewer parameters than conventional artificial neural networks [13]. While several heuristic-based quantum algorithms have been proposed for optimization tasks [5], circuit-based quantum approaches have shown considerable potential in a variety of engineering applications. In particular, quantum neural networks (QNNs), a subclass of VQAs, leverage quantum properties, such as entanglement and superposition, to effectively approximate high-dimensional data spaces, making them viable candidates for surrogate modeling in structural engineering problems. However, to date, no research has been reported on applying circuit-based quantum models to discrete structural systems, such as trusses. In this study, all QNN implementations were conducted on a simulator, taking into account the current limitations of quantum hardware. This represents an exploratory phase aimed at assessing the future applicability of actual quantum devices.
This study proposes a quantum neural network (QNN)-based framework for solving structural analysis problems in truss structures. Conventional neural network-based models for structural analysis primarily rely on coordinate-based inputs and often fail to adequately reflect the discrete characteristics inherent to truss systems. The proposed QNN framework instead utilizes an index-based input representation, where identifiable discrete information, such as member and node numbers, is used as input rather than spatial coordinates. This design aligns with the structural nature of trusses, where spatial interpolation is meaningless and only nodal responses carry physical significance. In particular, this study maps a single index to multiple structural features (e.g., members, nodes, and loading conditions), and it applies domain separation techniques to ensure computational flexibility and scalability in QNN training. Based on this understanding, we apply a QNN framework that leverages quantum entanglement and superposition to truss structure analysis, presenting new possibilities for tackling high-dimensional structural problems. Moreover, a mechanics-informed loss function based on the force method is constructed to integrate physical reliability into QNN training. This research is the first to incorporate circuit-based QNNs into the structural analysis of trusses by combining discrete index inputs with residual-based, physics-informed loss functions, thereby offering a modeling approach distinct from traditional FEM and classical neural networks. This attempt not only serves as an empirical assessment of QNN potential and performance in structural analysis, but also lays the groundwork for applying quantum machine learning to engineering problems, even under current quantum hardware limitations.
Practically, it suggests potential expansion into a novel computational framework for large-scale structural systems, while theoretically contributing to the understanding of QNN expressivity and the effectiveness of Lagrangian-duality-based loss formulations.
The main contributions of this study are as follows:
  • Proposal of a QNN framework for truss structure analysis: A quantum circuit-based QNN model was developed to analyze truss structures, demonstrating new possibilities for application.
  • Design of index-based input and domain decomposition: Instead of using coordinates, the model uses unique indices of members and nodes as inputs, and it applies domain decomposition to ensure flexibility and scalability.
  • Implementation of a mechanics-informed loss function: Physical constraints based on the force and displacement methods were integrated into a Lagrangian dual framework to construct the loss function.
  • Numerical validation: The predictive performance of the QNN was verified through 2D and 3D truss examples, and the effects of the qubit count and layer depth were analyzed.
The remainder of this paper is organized as follows. Section 2 presents the theoretical foundation for applying QNNs to structural optimization problems based on recent advances in quantum-based methods with a review of relevant prior research. Section 3 introduces the force method widely used in the analytical approach for truss structures and formulates the structural analysis problem applicable to QNNs by mathematically defining the stiffness representation, degree of freedom decomposition, and external–internal force relationships. Section 4 describes the components of the VQA framework and outlines the proposed QNN. It also defines an example problem and constructs the total loss function by incorporating the residual, penalty, and Lagrangian terms. In Section 5, the performance of the proposed QNN model is evaluated using a series of numerical experiments. This analysis includes not only prediction accuracy metrics, such as mean squared error (MSE; $L_2$ error) and the coefficient of determination ($R^2$), but also convergence characteristics, the number of qubits, circuit depth, and sensitivity to individual loss components. Additionally, Section 5.4 provides a comprehensive evaluation of the proposed QNN model’s performance in a structural and comparative analysis with previous studies based on the numerical results obtained in this study. Finally, Section 6 summarizes the proposed research and results, focusing on the applicability of QNNs to structural analysis and potential advancements in quantum circuit optimization.

2. Preliminary

A truss structure is generally modeled as a system of one-dimensional elastic members that transmit only axial forces and are connected at nodes. Each member is governed by a boundary value problem (BVP) with essential and natural boundary conditions defined at its ends. Although individual members are treated as discrete systems, the overall truss structure requires a global formulation that ensures compatibility and equilibrium at the connecting joints. Among the various methods that satisfy these conditions, the finite element method (FEM) is the most widely used, ultimately leading to a system of equations based on the global stiffness matrix K .
Although the FEM provides high accuracy in structural analysis, its numerical stability and computational complexity increase significantly as the system size grows or when extended to high-dimensional and nonlinear problems. In particular, for pin-jointed or flexible structures, geometric nonlinearity arising from shape changes requires more efficient analytical methods. Furthermore, the stiffness matrix is assembled to satisfy the boundary value problems of individual elements, and the internal forces are indirectly computed from the global displacement field of the system. The force method was introduced as an alternative to address these limitations, and it has been continuously developed by Linkwitz and Schek (1971) [14], Schek (1974) [15], Tibert and Pellegrino (2003) [16], and Saeed (2014, 2016) [17,18]. Unlike the finite element method, the direct control method converts deformed shapes obtained through shape analysis into control actions using the principles of the force method. This approach has been highlighted for its advantages in providing intuitive control results. The force method was developed in parallel with the advancement of structural shape control techniques. Shape control was first introduced in 1984 by Weeks [19], who demonstrated its feasibility for antenna structures using a finite element model. In 1985, Haftka and Adelman [20] proposed a static shape control procedure for flexible structures by incorporating temperature-based control elements. Subsequent studies by Edberg (1987) [21], Burdisso and Haftka (1990) [22], and Sener (1994) [23] introduced control strategies based on adjusting member lengths using actuators. Precision shape control using piezoelectric actuators has been actively investigated since 1999 and throughout the early 2000s.
Since the mid-2000s, shape control techniques have been expanded to incorporate intelligent control strategies, such as probabilistic search methods, genetic algorithms, and induced strain actuation theory. Korkmaz (2011) [24] classified these control strategies into active control, adaptive control, and intelligent control. Meanwhile, shape control using the force method gained momentum with the study by Kwan and Pellegrino in 1994 [25], who investigated the placement and operation of control members in space structures. In 1997, You [26] proposed a displacement control technique for cable structures based on the modification of the member lengths. In 2007, Dong and Yuan [27] validated the potential of shape adjustment using pre-stressed members. Kawaguchi (1996) [28] addressed the constant control problem, which involves the simultaneous control of displacements and internal forces. Subsequent studies by Xu and Luo (2009) [29], Wang (2013) [30], and Saeed (2014) [17] have further advanced the development of constant control techniques. The direct method that is based on the force method is computationally efficient. However, it has limitations in clearly defining the optimal state and requires selecting a single solution among many possibilities. To address this issue, various indirect optimization methods have been proposed. Nevertheless, certain limitations persist even when applying structural analysis, optimization, or control using the FEM or force method. Various algorithms have been introduced to overcome these challenges, including genetic algorithms, gradient-based methods, and dimensionality reduction techniques.
In recent years, quantum computing has emerged as an alternative method for solving complex control and optimization problems. In contrast to classical computing, quantum computing employs qubits as the fundamental unit of information processing. A qubit can exist in a superposition of the basis states $|0\rangle$ and $|1\rangle$ and is generally represented as follows:
$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \quad \alpha, \beta \in \mathbb{C}, \quad |\alpha|^2 + |\beta|^2 = 1.$ (1)
The measurement causes the quantum state to collapse probabilistically into one of the basis states. Computation can be performed using quantum circuits that combine entanglement between qubits and quantum gates. These fundamental properties of quantum systems offer computational potential for efficiently exploring high-dimensional and nonlinear design spaces, which are often intractable for classical optimization methods. Based on this potential, variational quantum algorithms (VQAs) have recently emerged as promising techniques for structural analyses and optimization problems. Traditional quantum algorithms include the Harrow–Hassidim–Lloyd (HHL) algorithm for solving linear systems, Grover’s algorithm for unstructured search problems, and quantum phase estimation (QPE) for precise eigenvalue computations. However, these algorithms assume fully error-corrected quantum hardware and are often difficult to implement on current NISQ devices due to limitations in circuit depth and quantum noise. In contrast to these gate-based approaches, quantum annealing (QA) is an alternative method for quantum optimization. QA leverages quantum tunneling to solve combinatorial optimization problems, which are formulated as quadratic unconstrained binary optimization (QUBO) or Ising models, and commercial annealing hardware has been developed primarily by D-Wave Systems. Several applications of QA for structural optimization involving discrete variables, such as member selection and topology optimization, have been reported. For example, Wils (2020) [31] demonstrated size optimization of two-dimensional truss structures using quantum annealing, and Honda et al. (2024) [32] proposed a QA-based method for truss structure optimization. However, QA has inherent limitations in terms of problem applicability, accuracy, and controllability.
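As a concrete illustration of the qubit state in Equation (1), the following stdlib-Python sketch checks normalization and simulates Born-rule measurement outcomes; the amplitudes are arbitrary choices for illustration, not values from this study.

```python
import math
import random

# A normalized single-qubit state |psi> = alpha|0> + beta|1>, with
# |alpha|^2 + |beta|^2 = 1. These amplitudes are arbitrary illustrative
# choices, not values taken from the paper.
alpha = complex(math.sqrt(0.3), 0.0)
beta = complex(0.0, math.sqrt(0.7))
assert abs(abs(alpha) ** 2 + abs(beta) ** 2 - 1.0) < 1e-12  # normalization

# Born rule: a computational-basis measurement collapses the state to |0>
# with probability |alpha|^2 and to |1> with probability |beta|^2.
p0, p1 = abs(alpha) ** 2, abs(beta) ** 2

# Simulate repeated measurements of identically prepared qubits.
random.seed(0)
counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[0 if random.random() < p0 else 1] += 1

print(p0, p1)              # 0.3 and 0.7 (up to floating point)
print(counts[0] / 10_000)  # empirical frequency close to 0.3
```

The collapse is irreversible: repeated runs only recover the probabilities $|\alpha|^2$ and $|\beta|^2$, never the amplitudes themselves.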
To overcome these limitations, hybrid quantum algorithms that combine classical computing and quantum circuits, namely the class of variational quantum algorithms (VQAs), have been proposed. Peruzzo et al. (2014) [33] introduced the variational quantum eigensolver (VQE) for solving eigenvalue problems, and, in the same year, Farhi et al. (2014) [34] proposed the quantum approximate optimization algorithm (QAOA) for solving combinatorial optimization problems. McClean et al. (2016) [35] established the theoretical foundation of VQE, and Kandala et al. (2017) [36] demonstrated its feasibility on IBM’s NISQ hardware through experiments. McClean et al. (2018) [37] identified the barren plateau phenomenon, a critical limitation that hinders learning in deep circuits due to vanishing gradients. In response, Grimsley et al. (2019) [38] proposed the ADAPT-VQE algorithm, which incrementally constructs a circuit by adding necessary gates based on the problem rather than using a fixed ansatz. This approach improves the expressivity and trainability of the VQE. Building on these developments, practical applications of structural analysis have also been expanding. Liu et al. (2024) [39] experimentally implemented eigenfrequency estimation in structural analysis by combining ABAQUS with VQE in a hybrid quantum–classical pipeline. Sato et al. (2023) [40] proposed a VQE formulation based on the Rayleigh quotient variational principle, and Lee and Kanno (2023) [41] applied a QPE-based eigenvalue computation method to structural dynamics. In the 2020s, hardware-efficient adaptive ansatz variants, such as Qubit-ADAPT-VQE, have been proposed. Recently, Kim et al. (2024) [42] demonstrated a VQE implementation using high-dimensional degrees of freedom of a single-photon qudit. 
Other efforts have also contributed to improving the VQE performance, including optimization techniques, such as quantum natural gradient and SPSA, as well as measurement reduction strategies, such as Hamiltonian term grouping and quantum shadow methods. As the theoretical foundations and hardware feasibility for VQA continue to mature, efforts to integrate these techniques with deep learning have also become increasingly active.
Physics-Informed Neural Networks (PINNs), first proposed by Raissi et al. (2017, 2019) [43,44] and Karniadakis (2021) [45], combine numerical analysis with neural networks and have been effectively applied to partial differential equations (PDEs) in various spatial and temporal domains. Lu et al. (2021) [46] later systematized the implementation of PINNs through the DeepXDE framework, and Wang et al. (2021) [47] discussed their limitations and directions for improvement to support practical applications. In addition to the increasing adoption of PINN-based methods, research on Quantum PINNs (QPINNs), which integrate quantum circuits with neural networks, has also gained momentum. Markidis (2022) [48] proposed a continuous-variable quantum circuit (CVQC)-based QPINN for solving the one-dimensional Poisson equation and highlighted the unique loss landscape in quantum training by showing that stochastic gradient descent (SGD) outperforms Adam in terms of convergence stability. Trahan et al. (2024) [13] applied a variational quantum circuit (VQC) and compared pure QPINNs, pure PINNs, and hybrid PINNs, demonstrating that quantum nodes can achieve similar or better accuracy with fewer parameters than classical counterparts. Xiao et al. (2024) [49] introduced a PI-QNN architecture for PDEs with periodic solutions, achieving significantly lower prediction errors than conventional PINNs. Sedykh et al. (2023) [50] proposed a hybrid QPINN (HQPINN) for fluid dynamics problems with complex geometries, demonstrating the scalability of QPINNs to more complicated structural domains. In addition, Norambuena et al. (2024) [51] applied PINN techniques to quantum optimal control and achieved higher success rates and shorter solution times in state-transition problems for open quantum systems.
However, despite its various advantages, QPINN is often inefficient for practical structural analysis because of its slow computational speed and complex network architecture. In particular, hybrid QPINNs, which interconnect quantum circuits and neural network layers in an alternating sequence (quantum–neural–quantum–neural), tend to increase the hardware resource consumption and optimization burden. Repeated insertion of quantum layers may also lead to error accumulation owing to noise. Moreover, clear limitations exist in terms of scalability to high-dimensional structural problems. To address these limitations, this study adopted a quantum neural network (QNN)-based approach, which is simpler in structure and more computationally efficient than QPINNs. QNNs can approximate input–output relationships without explicitly calculating complex differential terms, offering advantages in terms of computational speed and training stability, which are critical in structural analysis and optimization. The physical relationships in linear and nonlinear systems, such as truss structures, can be effectively modeled from a function-approximation perspective. Furthermore, by integrating advanced circuit optimization techniques, such as the quantum natural gradient and hardware-efficient ansatz, the applicability of QNNs to real-world structural systems continues to expand.

3. Force Method

3.1. Force Method and Solutions in Truss Analysis

A truss structure is modeled as an assembly of one-dimensional elastic elements that carry only axial forces and are connected at the nodes to form the structure. For each member defined over a domain $x \in [0, L]$, the displacement $u(x)$ under an external axial force $P_x$ is governed by the differential equation
$EA\,u_{,xx} + P_x = 0, \quad x \in [0, L],$ (2)
which represents a boundary value problem (BVP) with essential and natural boundary conditions. However, because a truss is a discrete system in which multiple members are connected at joints, the boundary conditions of each member cannot be applied independently. Instead, the entire structure must satisfy equilibrium, compatibility, and flexibility conditions.
Among the various methods for obtaining such solutions, the finite element method (FEM) is the most widely used. The FEM leads to the global stiffness formulation of the structure, resulting in the following total stiffness equation:
$Kd = p,$ (3)
where K denotes the global stiffness matrix, d is the nodal displacement vector, and p represents the nodal force vector. Equation (3) is essentially an assembled system of equations that satisfies the boundary conditions of the individual members. Matrix K characterizes the deformation behavior and load–displacement relationship of the structure, and it corresponds to the Hessian matrix of a potential energy function, exhibiting symmetry and positive definiteness.
However, evaluating the internal forces requires first solving for nodal displacements from K and subsequently transforming the results into member coordinates. Therefore, it is difficult to directly interpret the structural behavior from K alone. In contrast, the force method provides a more direct and intuitive relationship between member forces and loads, as well as between nodal displacements and elongations, when compared with the displacement-based method.
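To make the stiffness equation concrete, the following sketch assembles $K$ for a hypothetical two-bar planar truss and solves $Kd = p$ for the nodal displacements; the geometry, EA value, and load are illustrative assumptions rather than an example from this study.

```python
import numpy as np

# FEM-style assembly of the global stiffness equation K d = p for a
# hypothetical two-bar planar truss (illustrative geometry, not from the
# paper): supports at (0,0) and (1,0), one free node at (0.5,0.5), EA = 1.
EA = 1.0
free = np.array([0.5, 0.5])
supports = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]

# Global stiffness for the two free DOFs: K = sum over bars of (EA/L) n n^T,
# where n is the unit vector along each bar.
K = np.zeros((2, 2))
for s in supports:
    vec = free - s
    L = np.linalg.norm(vec)
    n = vec / L
    K += (EA / L) * np.outer(n, n)

p = np.array([0.0, -1.0])    # unit downward load at the free node
d = np.linalg.solve(K, p)    # nodal displacement vector

assert np.allclose(K @ d, p)
print(K)    # symmetric and positive definite, as noted above
print(d)
```

For this symmetric layout the assembled $K$ is a multiple of the identity, so the free node simply displaces along the load direction.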

3.2. Basic Equations and Mechanical Behavior

The fundamental equations of the force method are derived from the relationships of equilibrium, compatibility, and flexibility [6]. They are given as follows.
Consider a truss structure defined in a d-dimensional spatial domain consisting of b members and j nodes, where c degrees of freedom are constrained. The external force vector p and internal force vector t satisfy the following equilibrium equations:
$At = p.$ (4)
Here, A denotes an equilibrium matrix. Meanwhile, the compatibility relationship defines the connection between the member elongation vector e and nodal displacement vector d , and it is given as
$Bd = e.$ (5)
Here, B is the compatibility matrix, which is related to the equilibrium matrix through a transpose relationship. Because $B^T = A$, both matrices have the same rank r. This relationship is derived from the principle of virtual work and is expressed as $\delta e^T t = \delta d^T p$, indicating interdependence between the two matrices.
Finally, the flexibility relationship connects the equilibrium and compatibility relations by relating the internal forces to elongations as follows:
$Ft = e.$ (6)
Here, the flexibility matrix F is a diagonal matrix, where each diagonal element represents the flexibility of an individual member.
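The three relations above, Equations (4)–(6), can be checked numerically on a small example. The sketch below uses a hypothetical two-bar planar truss (supports at (0,0) and (1,0), free node at (0.5,0.5); an illustrative assumption, not an example from this study) to build A, F, and $B = A^T$ and to verify equilibrium and compatibility.

```python
import numpy as np

# Equilibrium, compatibility, and flexibility relations of the force method,
# checked on a hypothetical two-bar planar truss: supports at (0,0) and
# (1,0), one free node at (0.5,0.5), EA = 1.
EA = 1.0
free = np.array([0.5, 0.5])
supports = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]

# Equilibrium matrix A: one column of direction cosines per member,
# oriented from each support toward the free node, so that A t = p
# with tensile member forces t taken as positive.
cols, lengths = [], []
for s in supports:
    vec = free - s
    L = np.linalg.norm(vec)
    cols.append(vec / L)
    lengths.append(L)
A = np.column_stack(cols)
B = A.T                                  # compatibility matrix, B d = e
F = np.diag([L / EA for L in lengths])   # member flexibilities, F t = e

p = np.array([0.0, -1.0])   # unit downward load at the free node
t = np.linalg.solve(A, p)   # statically determinate, so A is square
e = F @ t                   # member elongations
d = np.linalg.solve(B, e)   # nodal displacements from compatibility

assert np.allclose(A @ t, p)   # equilibrium holds
assert np.allclose(B @ d, e)   # compatibility holds
print(t)   # both members in compression under the downward load
```

Because this example is statically and kinematically determinate, A and B are square and invertible, so t and d follow directly without the pseudo-inverse machinery introduced next.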

3.3. Solutions: Bar-Force t and Displacement d

The general solution t of the equilibrium Equation (4) can be expressed as the sum of a particular solution $t_p$ and a complementary homogeneous solution $t_s$, i.e.,
$t = t_p + t_s = A^+ p + S \alpha.$ (7)
Here, $t_p$ is obtained using the pseudo-inverse $A^+$, and $t_s$ lies in the null space of A, which is spanned by the columns of matrix S. Vector $\alpha$ denotes a set of scalar coefficients.
Substituting t into the flexibility Relation (6), the member elongation vector becomes the following:
$e = F(t_p + S \alpha).$ (8)
As the elongation vector must satisfy compatibility, it is orthogonal to the left null space of B, and, given $B = A^T$, it follows that $S^T e = 0$. Substituting e yields the following expression for $\alpha$:
$\alpha = -(S^T F S)^{-1} S^T F t_p.$ (9)
By substituting Equation (9) into Equation (7), the general solution for internal force vector t becomes
$t = A^+ p - S (S^T F S)^{-1} S^T F A^+ p.$ (10)
Similar to t, the general solution d of the compatibility Equation (5) can be expressed as the sum of a particular solution $d_p$ and a complementary homogeneous solution $d_m$. For kinematically indeterminate systems, determining this general solution is difficult. However, because truss structures are generally kinematically determinate, the particular solution corresponding to the external force, given by $d_p = B^+ e$, serves as a valid solution to Equation (5). Instead of solving Equation (5) directly, it is more efficient to utilize the direct relationship between displacement d and external force p.
In Equation (6), the flexibility matrix F is a full-rank diagonal matrix, so its inverse $F^{-1}$ always exists. By combining $t = F^{-1} e$ from Equation (6) with $e = B d$ from Equation (5) and substituting the result into the equilibrium Equation (4), the following relationship is obtained:
$A F^{-1} B d = p.$ (11)
Therefore, d can be expressed as
$d = (A F^{-1} B)^{-1} p = B^+ F A^+ p.$ (12)
Based on this result, it follows that the stiffness matrix in Equation (3) can be expressed as
$K = A F^{-1} B.$ (13)
If the matrix K is invertible, the system has a unique solution.
In conclusion, the vectors t and d given in Equations (7) and (12) are solutions to the equilibrium and compatibility equations, namely Equations (4) and (5) (or Equation (11)). Therefore, the approximated functions $Q_\theta$ for t and d described in Section 4 must satisfy the constraints imposed by Equations (4) and (11).
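The derivation above can be exercised end to end on a small statically indeterminate example. The sketch below uses a hypothetical three-bar truss chosen purely for illustration (it is not an example from this study): it computes t from Equations (7) and (9) and d from Equation (12), then checks equilibrium and compatibility.

```python
import numpy as np

# General force-method solution for a hypothetical statically indeterminate
# planar truss (illustrative geometry): free node at (0.5,0.5) connected to
# three supports, so b = 3 members but only 2 free DOFs (one redundancy).
EA = 1.0
free = np.array([0.5, 0.5])
supports = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.5, 0.0])]

cols, lengths = [], []
for s in supports:
    vec = free - s
    L = np.linalg.norm(vec)
    cols.append(vec / L)
    lengths.append(L)
A = np.column_stack(cols)                  # 2 x 3 equilibrium matrix
F = np.diag([L / EA for L in lengths])     # member flexibility matrix
p = np.array([0.0, -1.0])                  # unit downward load

# Particular solution via the pseudo-inverse, plus the self-stress mode
# S spanning the null space of A (Equation (7)).
t_p = np.linalg.pinv(A) @ p
_, _, Vt = np.linalg.svd(A)
S = Vt[2:].T                               # null-space basis, shape (3, 1)

# Compatibility (S^T e = 0) fixes the coefficient alpha (Equation (9)).
alpha = -np.linalg.solve(S.T @ F @ S, S.T @ F @ t_p)
t = t_p + S @ alpha                        # internal forces

# Displacements from K = A F^{-1} B with B = A^T (Equation (12)).
K = A @ np.linalg.inv(F) @ A.T
d = np.linalg.solve(K, p)

assert np.allclose(A @ t, p)               # equilibrium is satisfied
assert np.allclose(S.T @ (F @ t), 0)       # compatibility: S^T e = 0
print(t, d)
```

Because the pair of conditions $At = p$ and $S^T F t = 0$ determines t uniquely, the force-method result coincides with the member forces recovered from the displacement solution, $t = F^{-1} A^T d$.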

4. QNN Model

4.1. Overall QNN Architecture

This section describes the overall architecture of the quantum neural network (QNN)-based structural analysis process proposed in this study. The flowchart presented in Figure 1 summarizes the workflow of the surrogate modeling process using QNN. The proposed indexed QNN is based on the principles of the variational quantum algorithm (VQA) and consists of three main stages: quantum encoding of the input data, optimization of a parameterized quantum circuit (variational quantum circuit, VQC), and generation of outputs via quantum measurement. The following subsection defines and explains the structure of the VQC, a core component of the QNN, along with the basic quantum operations that constitute it, such as rotation and entanglement gates.
Variational quantum algorithms (VQAs) are hybrid computational approaches that combine quantum computing with classical optimization techniques, solving optimization problems by leveraging the unitary properties of quantum systems. This concept was first introduced through the variational quantum eigensolver (VQE) proposed by Peruzzo et al. in 2014 [33], and it has since expanded into various applications, such as the quantum approximate optimization algorithm (QAOA) and the variational quantum classifier. Among these VQAs, the quantum neural network (QNN) represents a typical quantum–classical hybrid learning framework that operates based on fundamental quantum properties, such as superposition and entanglement. A QNN models the relationship between the input and output by constructing a quantum circuit in a neural network-like structure, with a variational quantum circuit (VQC) composed of rotation gates ($R_X$, $R_Y$, $R_Z$) and entangling gates (e.g., CNOT) at its core. In this section, we introduce the definitions and mathematical formulations of the fundamental operations used in the QNN implementation: rotation and entangling gates. The rotation gate is a single-qubit unitary operation that rotates the qubit state about a specific axis, and it is defined as follows:
$R_\alpha^{(i)}(\theta) = \exp\!\left(-\tfrac{i\theta}{2}\,\sigma_\alpha^{(i)}\right) = \cos\tfrac{\theta}{2}\,I - i \sin\tfrac{\theta}{2}\,\sigma_\alpha^{(i)}, \quad \alpha \in \{X, Y, Z\}.$ (14)
Here, $I$ denotes the identity operator, and $X$, $Y$, and $Z$ refer to the Pauli matrices. The $R_Z(\theta)$ gate applies only a phase shift to the basis states $|0\rangle$ and $|1\rangle$. By contrast, the $R_X(\theta)$ and $R_Y(\theta)$ gates perform rotations in the $|\pm\rangle$ and $|\pm i\rangle$ eigenbases, respectively, thereby controlling the qubit state. The controlled-NOT (CNOT) gate is a two-qubit entangling operator that applies the Pauli-X operation to the target qubit when the control qubit is in the $|1\rangle$ state, thereby inducing quantum correlations between the qubits. The X operation functions as a classical NOT gate by flipping the qubit state between $|0\rangle$ and $|1\rangle$. The CNOT gate is mathematically defined as follows [52]:
$$\mathrm{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix} = e^{\,i\frac{\pi}{4}\,(I_1 - Z_1)(I_2 - X_2)}.$$
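For illustration, the exponential form of the CNOT gate above can be checked numerically. The following sketch (ours, not part of the original implementation) assumes NumPy and SciPy are available and compares the matrix definition against $e^{i\frac{\pi}{4}(I-Z)\otimes(I-X)}$:

```python
import numpy as np
from scipy.linalg import expm

# Single-qubit operators
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# CNOT in matrix form (qubit 1 = control, qubit 2 = target)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Exponential form: exp(i*pi/4 * (I - Z) ⊗ (I - X))
G = np.kron(I - Z, I - X)
CNOT_exp = expm(1j * np.pi / 4 * G)

assert np.allclose(CNOT, CNOT_exp)  # the two definitions coincide exactly
```

Since $(I-Z)\otimes(I-X)$ vanishes on the control-$|0\rangle$ subspace, the exponential acts as the identity there and as $X$ on the control-$|1\rangle$ subspace, reproducing the matrix form with no residual global phase.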
Based on these fundamental quantum operations, this study designed two quantum neural network architectures to process indexed input data.
The quantum neural network (QNN) implemented in this study can be categorized into two architectures based on how the input data are processed. First, the single-domain structure encodes the entire input dataset into a unified variational quantum circuit (VQC) for processing. All input variables are mapped to a single quantum state, and the transformation and optimization of the entire dataset are carried out through a single VQC. This approach offers a relatively simple circuit design, and it is effective when the input dimension is low or the structural complexity is minimal (Figure 2a).
In contrast, the separate-domain structure divides the input data by index domain and constructs independent quantum circuits for each domain, allowing them to be trained separately. Each circuit performs localized optimization on a specific subset of the input data, and the final output is composed by aggregating the measurement results from each circuit. This approach is particularly suitable for handling high-dimensional input data or large-scale structural analysis problems (Figure 2b).
As illustrated in Figure 2, the quantum neural network (QNN) consists of three stages: (1) a feature map (input encoding), which transforms classical inputs into quantum states; (2) a variational quantum circuit (parameterized circuit), denoted as $U_\theta$, which includes parameterized rotation gates and entangling gates such as CNOT; and (3) measurement (output decoding), which extracts classical outputs from quantum states via expectation value measurements.
In particular, the entanglement between qubits in the VQC is implemented through the following sequence of operations:
$$\varepsilon^{(l)} = \mathrm{CNOT}_{(1,2)} \cdot \mathrm{CNOT}_{(2,3)} \cdots \mathrm{CNOT}_{(n-1,\,n)}.$$
The entangling operation $\varepsilon^{(l)}$ is composed of a sequence of CNOT gates applied between adjacent qubit pairs, which effectively generates quantum entanglement within the circuit. This entanglement structure enhances the expressivity of the quantum circuit by enabling interactions between qubits, allowing the model to effectively capture complex data characteristics. The following section provides a detailed explanation of the input encoding method, the parameterized structure of the variational quantum circuit, and the measurement and output process, which together constitute the QNN architecture.

4.2. Input Representation: Index-Based Encoding

In this study, the proposed quantum neural network (QNN) encodes input data in an index-based structure and constructs a parameterized quantum circuit (variational quantum circuit, VQC) accordingly.
A representative encoding technique involves mapping each input value $x_i$ onto a qubit using the $R_Y$ rotation gate. The full input vector $\mathbf{x} = [x_1, x_2, \ldots, x_n]^T$ is thus transformed into a quantum state as follows:
$$|\psi\rangle = \bigotimes_{i=1}^{n} R_Y^{(i)}(x_i)\,|0\rangle.$$
Here, $R_Y^{(i)}(x_i)$ denotes the operator that applies a single-qubit rotation around the Y-axis to the $i$-th qubit based on the input value $x_i$, transforming the initial state $|0\rangle^{\otimes n}$ into a quantum state that reflects the input.
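This angle-encoding step can be made concrete with a small NumPy sketch (ours, for illustration only) that builds $|\psi\rangle = \bigotimes_i R_Y(x_i)|0\rangle$ explicitly as a product state:

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-axis rotation, R_Y(theta) = exp(-i*theta/2 * Y)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def encode(x):
    """Map a classical input vector x to the state |psi> = ⊗_i R_Y(x_i)|0>."""
    state = np.array([1.0])
    for xi in x:
        # Kronecker product appends each rotated qubit to the register
        state = np.kron(state, ry(xi) @ np.array([1.0, 0.0]))
    return state

psi = encode([np.pi / 2, 0.0])  # first qubit rotated to |+>, second left in |0>
```

For the example inputs, the amplitudes are $[\tfrac{1}{\sqrt{2}}, 0, \tfrac{1}{\sqrt{2}}, 0]$, i.e., the separable state $|{+}\rangle\otimes|0\rangle$, confirming that each input angle controls one qubit independently before entangling gates are applied.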
The external load vector $\mathbf{p} = \{\,p_j \in \mathbb{R}^{n_d} \mid j \in \mathcal{N}\,\}$ returns the force vector $p_j$ applied at each node according to the given conditions. Each member $i \in \mathcal{M}$ returns its connectivity information via the mapping $C(i) = \big(a(i), b(i)\big)$, where the connectivity structure is defined as $C : \mathcal{M} \to \mathcal{N} \times \mathcal{N}$, with $\mathcal{N} \times \mathcal{N} = \{\,(i, j) \mid i \in \mathcal{N},\, j \in \mathcal{N}\,\}$. The axial rigidity of each member is represented by the index-based expression $\mathbf{EA} = \{\,EA^{(i)} \mid i \in \mathcal{M}\,\}$.
The feature vector $\phi(i)$ constructed from these index-based inputs is used to define the loss function of the quantum neural network model. Depending on the prediction target, the composition of $\phi(i)$ varies. For example, the model predicting the axial force $\hat{t}^{(i)}$ is defined as follows:
$$\hat{t}^{(i)} = Q_{\theta_t}\!\big(\phi_t(i)\big), \quad \text{where} \quad \phi_t(i) = \big[\,x_{a(i)},\, x_{b(i)},\, p_{a(i)},\, p_{b(i)},\, EA^{(i)}\,\big].$$
The model for predicting nodal displacements $\hat{d}^{(j)}$ is defined as follows:
$$\hat{d}^{(j)} = Q_{\theta_d}\!\big(\phi_d(j)\big), \quad \text{where} \quad \phi_d(j) = \big[\,x_j,\, p_j,\, \mathbf{EA}\,\big].$$
Here, both quantum neural network models $Q_{\theta_t}$ and $Q_{\theta_d}$ are defined as $\hat{Q} : \mathbb{N}^{+} \to \mathbb{R}$, where the parameter space is replaced with index-based inputs, effectively reducing the input dimensionality. Compared to coordinate-based input models, such as $Q_\theta : \mathbb{R}^{2 n_d} \to \mathbb{R}$ or $Q_\theta : \mathbb{R}^{n_d} \to \mathbb{R}^{n_d}$, this formulation is significantly more concise and enables improved scalability and flexibility of the output structure.
In addition, when using the member set M or node set N as a dataset for a single epoch, the quantum neural network can be trained without explicitly constructing separate mechanical information for each index since the equilibrium matrix A and compatibility matrix B can be directly utilized. This allows for a more efficient modeling process within the quantum neural network framework.

4.3. VQC (Variational Quantum Circuit)

This section explains how a variational quantum circuit (VQC) is constructed for the encoded quantum state. The VQC, the core component of a quantum neural network (QNN), is built by repeatedly applying trainable rotation and entangling gates. The overall circuit is expressed as follows:
$$U(\boldsymbol{\theta}) = \prod_{l=1}^{L} \left[\bigotimes_{i=1}^{n} R_Z^{(i)}\!\big(\theta_{l,i}^{(Z)}\big)\, R_Y^{(i)}\!\big(\theta_{l,i}^{(Y)}\big)\, R_X^{(i)}\!\big(\theta_{l,i}^{(X)}\big)\right] \cdot \varepsilon^{(l)}.$$
Here, $L$ denotes the number of layers, $n$ represents the number of qubits, and $\varepsilon^{(l)}$ refers to the entangling operation in the $l$th layer, which is composed of a sequence of CNOT gates between adjacent qubits. The rotation gate block $R_Z^{(i)} R_Y^{(i)} R_X^{(i)}$ is a sequence of single-qubit unitary operations applied to the $i$th qubit in the $l$th layer, rotating the qubit state around the Z, Y, and X axes. This combination forms a general $SU(2)$ unitary operator, allowing flexible control over the position of the qubit state on the Bloch sphere.
$$R_\alpha^{(i)}(\theta) = \exp\!\left(-\,i\,\frac{\theta}{2}\,\sigma_\alpha\right).$$
Therefore, applying this sequence of three rotation operations to each qubit increases the expressivity of the quantum circuit. For instance, in the case of a circuit composed of two qubits ( n = 2 ) and two layers ( L = 2 ), the parameterized circuit U ( θ ) is expanded as follows:
$$U(\boldsymbol{\theta}) = \left[\bigotimes_{i=1}^{2} R_Z^{(i)}\!\big(\theta_{2,i}^{(Z)}\big)\, R_Y^{(i)}\!\big(\theta_{2,i}^{(Y)}\big)\, R_X^{(i)}\!\big(\theta_{2,i}^{(X)}\big)\right] \cdot \mathrm{CNOT}_{(1,2)} \cdot \left[\bigotimes_{i=1}^{2} R_Z^{(i)}\!\big(\theta_{1,i}^{(Z)}\big)\, R_Y^{(i)}\!\big(\theta_{1,i}^{(Y)}\big)\, R_X^{(i)}\!\big(\theta_{1,i}^{(X)}\big)\right] \cdot \mathrm{CNOT}_{(1,2)}.$$
Each layer consists of a sequence of parameterized single-qubit rotation gates ($R_X$, $R_Y$, $R_Z$), which is followed by an entangling gate (CNOT). Independent parameters $\theta_{l,i}^{(k)}$ are used for each qubit $i$ and rotation axis $k$ in the layers $l = 1$ and $2$. For a two-qubit system, the entangled structure is composed of a single $\mathrm{CNOT}_{(1,2)}$ gate, which is applied identically across all the layers. Finally, applying the full quantum circuit to the encoded input state $|x\rangle$ yields the following trainable quantum state:
$$|\psi(\boldsymbol{\theta})\rangle = U(\boldsymbol{\theta})\,|x\rangle.$$

4.4. Measurement and Output

The quantum state that passes through the VQC is measured with respect to a specific observable $P$, and its expectation value is used as the output. The entire circuit generates a quantum state $|\psi(\boldsymbol{\theta}, \mathbf{x})\rangle$ depending on the parameter vector $\boldsymbol{\theta} = \{\theta_{l,i}^{(X)}, \theta_{l,i}^{(Y)}, \theta_{l,i}^{(Z)}\}$ and the input $\mathbf{x}$. The expectation value is computed as follows:
$$\langle P \rangle = \langle \psi(\boldsymbol{\theta}, \mathbf{x}) \,|\, P \,|\, \psi(\boldsymbol{\theta}, \mathbf{x}) \rangle.$$
The measurement operator $P$ is typically given in the form of a tensor product of Pauli operators applied to selected qubits, such as $P = Z^{(i)}$ or $P = Z^{\otimes n}$. The resulting expectation value $\langle P \rangle$ is interpreted as the classical output $y_{\text{pred}}$ and used to compute the loss function $\mathcal{L}$ by evaluating the discrepancy from the target value $y_{\text{true}}$ as follows:
$$\mathcal{L}_r(\boldsymbol{\theta}) = \frac{1}{N} \sum_{j=1}^{N} \left( \langle P \rangle_j - y_j^{\text{true}} \right)^2.$$
This loss function is iteratively minimized using a classical optimizer that updates the values of the trainable parameters θ . This process forms the core structure of the variational quantum algorithm (VQA), consisting of the iterative cycle of “quantum state preparation → measurement → loss evaluation → parameter update”. By optimizing the multivariable parameters θ , the quantum circuit learns a suitable representation of the target problem.

4.5. Truss Analysis Model and Loss Function

4.5.1. Mathematical Model for Truss Structural Analysis

To design the loss function within the QNN framework described in the previous section, the static truss analysis model introduced in Section 3 is formulated as follows:
$$\text{Find } u \in \mathbb{R}^n \text{ such that } \mathcal{A}\big(u(X, z)\big) = 0.$$
Here, $\mathcal{A}$ denotes a mechanically defined operator, $X \in \Omega_s$ represents the spatial domain, and $z \in \Omega_p$ denotes the set of physical parameters of the truss structure. In Equation (26), the member force model corresponds to $u \equiv t$, which satisfies the equilibrium Equation (4), while the nodal displacement approximation model is defined as $u \equiv d$, satisfying the stiffness Equation (11).
The quantum neural network (QNN) model Q θ used to approximate such boundary value problems in a finite-dimensional setting can be expressed as follows:
$$u(X, z) \approx \hat{u}(X, z, \boldsymbol{\theta}) = Q_\theta(X, z), \qquad X \in \Omega_s,\; z \in \Omega_p.$$
Here, $\Omega_s = \{\,X_j \in \mathbb{R}^d \mid j = 1, \ldots, m\,\}$ represents the spatial domain, $\Omega_p \subset \mathbb{R}^{b \times p}$ denotes the set of physical parameters for the truss structure, and $\boldsymbol{\theta}$ refers to the parameters of the QNN.
In the separate-domain structure, the input consists of index-based data, as in the single-domain case, but it is partitioned according to the corresponding indices for each separated QNN structure and embedded into each quantum circuit individually. In this case, as illustrated in Figure 1, the loss function is computed by aggregating the outputs from each quantum circuit, which means that the output dataset must preserve the mechanical information throughout. Therefore, training is conducted using the output t corresponding to the index dataset M for a single epoch (or, alternatively, using N and d ).

4.5.2. Lagrangian Dual Optimization

To design a loss function for the QNN model that predicts either the member forces or the nodal displacements, the problem can be formulated as an optimization problem consisting of a residual term, expressed using the Euclidean norm, and mechanical equality constraints.
$$\min_{\boldsymbol{\theta}} \; \| u - \hat{u} \|_2^2 \quad \text{subject to} \quad A \hat{u} = p,$$
where $\hat{u}$ denotes the predicted solution obtained using Equation (27), $A$ is the coefficient matrix of the equilibrium equation (or the coefficient matrix from Equation (11)), and $p$ represents the external force vector. One possible approach for solving the equality-constrained optimization problem defined in Equation (28) is to reformulate it as an unconstrained optimization problem using a quadratic penalty function.
Let the objective function be defined as $J_r(\boldsymbol{\theta}) = \| u - \hat{u} \|_2^2$ and the mechanical constraint function as
$$C(\boldsymbol{\theta}) = p - A \hat{u}.$$
Then, using $J_r$, $C$, and the penalty parameter $\beta$, Equation (28) can be reformulated as the following unconstrained optimization problem:
$$\arg\min_{\boldsymbol{\theta}} J_\beta(\boldsymbol{\theta}) = \arg\min_{\boldsymbol{\theta}} \left[ J_r(\boldsymbol{\theta}) + \beta \, \| C(\boldsymbol{\theta}) \|_2^2 \right],$$
where β is a penalty parameter that follows an increasing scalar sequence across the iterations. Although the value of β significantly influences the stability of the optimization system, stability analysis is beyond the scope of this study.
The optimization model in Equation (30) represents an approach for solving the constrained optimization problem. An alternative method for solving Equation (28) is the Lagrangian multiplier technique, which is formulated as follows:
$$\arg\min_{\boldsymbol{\theta}} J_\lambda(\boldsymbol{\theta}) = \max_{\lambda \geq 0} \min_{\boldsymbol{\theta}} \left[ J_r(\boldsymbol{\theta}) + \lambda \cdot C(\boldsymbol{\theta}) \right].$$
Here, $\lambda \in \mathbb{R}^N$ is the Lagrangian multiplier vector, which is updated at each iteration according to the learning rate $\eta$ as follows:
$$\lambda^{(t+1)} = \lambda^{(t)} + \eta \, C\big(\boldsymbol{\theta}^{(t)}\big).$$
This approach obtains an optimal solution by solving the original optimization problem together with its dual problem.
The augmented Lagrangian method, which combines the two aforementioned approaches, is described in [53]. This method augments the Lagrangian function J λ ( θ ) by incorporating an additional penalty term, and it is formulated as follows:
$$\arg\min_{\boldsymbol{\theta}} J_{\lambda,\beta}(\boldsymbol{\theta}) = \max_{\lambda \geq 0} \min_{\boldsymbol{\theta}} \left[ J_r(\boldsymbol{\theta}) + \lambda \cdot C(\boldsymbol{\theta}) + \beta \, \| C(\boldsymbol{\theta}) \|_2^2 \right].$$
The update rule for $\lambda$ used to obtain the optimal solution in Equation (33) is defined as
$$\lambda^{(t+1)} = \lambda^{(t)} + \beta \, C\big(\boldsymbol{\theta}^{(t)}\big).$$
This type of approach is known to converge stably, without requiring an increasing sequence of $\beta$ values [53].
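To make the update cycle concrete, the augmented Lagrangian scheme can be demonstrated on a toy equality-constrained least-squares problem of the same form, $\min \|u - \hat{u}\|_2^2$ subject to $A\hat{u} = p$. The NumPy sketch below is ours: it optimizes $\hat{u}$ directly by gradient descent rather than through a QNN, and all problem sizes and step sizes are illustrative:

```python
import numpy as np

# Toy instance: the feasible vector u is itself the constrained optimum
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
u = rng.standard_normal(5)
p = A @ u

u_hat = np.zeros(5)     # "predictions", optimized directly in this sketch
lam = np.zeros(3)       # Lagrangian multiplier vector
beta, lr = 1.0, 0.02    # fixed penalty parameter and gradient step size

for _ in range(100):            # outer loop: multiplier updates
    for _ in range(100):        # inner loop: minimize J_r + lam.C + beta*||C||^2
        C = p - A @ u_hat                               # constraint residual
        grad = 2 * (u_hat - u) - A.T @ lam - 2 * beta * A.T @ C
        u_hat -= lr * grad
    lam += beta * (p - A @ u_hat)                       # lambda update, Eq. (34)
```

Because the feasible point $u$ minimizes the residual exactly, the iterates converge to $\hat{u} \approx u$ with a vanishing constraint violation, and, consistent with [53], the fixed $\beta$ never needs to be increased.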
In this study, a loss-function-based optimization model was constructed for the structural analysis of truss systems. The variational quantum algorithm (VQA) encodes the input data into quantum states through a variational quantum circuit (VQC) and computes the loss function from the measured output. The parameters are then iteratively optimized using a classical optimizer, specifically the Adam algorithm.

4.5.3. Loss Functions for the Prediction of Bar Force

Based on the Lagrangian dual-optimization framework, the loss function for the member force prediction model was defined. First, the residual loss function is given by
$$\mathcal{L}_r(\boldsymbol{\theta}) = \frac{1}{N} \| t - \hat{t} \|_2^2 = \frac{1}{N} \| r \|_2^2,$$
where $r = t - \hat{t}$, and $\hat{t}$ denotes the predicted values obtained using $Q_\theta$. The quadratic penalty term is defined as follows:
$$\mathcal{L}_q(\beta, \boldsymbol{\theta}) = \beta \, \| p - A \hat{t} \|_2^2,$$
where $\beta$ is the penalty parameter. The loss term $\mathcal{L}_q$ quantifies the degree of violation of the structural equilibrium condition, and it serves as an additional constraint to ensure structural consistency. Given that $p = A t$, the penalty term can be equivalently expressed as
$$\mathcal{L}_q(\beta, \boldsymbol{\theta}) = \beta \, \| A (t - \hat{t}) \|_2^2 = \beta \, \| A r \|_2^2.$$
The loss function term that incorporates the Lagrangian multiplier is defined as follows:
$$\mathcal{L}_L(\lambda, \boldsymbol{\theta}) = \lambda^T (p - A \hat{t}) = \lambda^T (A r),$$
where $\lambda$ is the Lagrangian multiplier vector, and $A r$ represents the constraint term that directly enforces the equilibrium condition through the Lagrangian formulation.
From $\mathcal{L}_r$, $\mathcal{L}_q$, and $\mathcal{L}_L$, the total loss function $\mathcal{L}_t$ can be formulated. The optimization model based on the quadratic penalty method is expressed as follows:
$$\mathcal{L}_t^{\beta}(\beta, \boldsymbol{\theta}) = \mathcal{L}_r + \mathcal{L}_q = \frac{1}{N} \| r \|_2^2 + \beta \, \| A r \|_2^2,$$
where the penalty parameter $\beta$ is treated as a constant. The optimization model that incorporates the Lagrangian multiplier is formulated as follows:
$$\mathcal{L}_t^{\lambda}(\lambda, \boldsymbol{\theta}) = \mathcal{L}_r + \mathcal{L}_L = \frac{1}{N} \| r \|_2^2 + \lambda^T (A r).$$
The Lagrangian multiplier $\lambda$ is updated as follows:
$$\lambda_{i+1} = \lambda_i + \eta \, (A r).$$
The augmented Lagrangian method, which combines $\mathcal{L}_t^{\beta}$ and $\mathcal{L}_t^{\lambda}$, is defined as follows:
$$\mathcal{L}_t^{\lambda,\beta}(\lambda, \beta, \boldsymbol{\theta}) = \mathcal{L}_r + \mathcal{L}_L + \mathcal{L}_q = \frac{1}{N} \| r \|_2^2 + \lambda^T (A r) + \beta \, \| A r \|_2^2.$$
The Lagrangian multiplier $\lambda$ used in $\mathcal{L}_t^{\lambda,\beta}$ is updated as follows:
$$\lambda_{i+1} = \lambda_i + \beta \, (A r).$$
The model proposed in this study applies the same optimization framework to both the member force $t$ and displacement $d$ QNN models, with only the design variables changed, as follows. The residual for $d$ is given by $r = d - \hat{d}$, and the coefficient matrix $A$ is replaced by $[A F^{-1} B] = K$. Accordingly, the residual loss is $\mathcal{L}_r = \frac{1}{N} \| r \|_2^2$, and the total loss functions are defined as
$$\begin{aligned} \mathcal{L}_t^{\beta}(\beta, \boldsymbol{\theta}) &= \mathcal{L}_r + \beta \, \| K r \|_2^2, \\ \mathcal{L}_t^{\lambda}(\lambda, \boldsymbol{\theta}) &= \mathcal{L}_r + \lambda^T (K r), \\ \mathcal{L}_t^{\lambda,\beta}(\lambda, \beta, \boldsymbol{\theta}) &= \mathcal{L}_r + \lambda^T (K r) + \beta \, \| K r \|_2^2. \end{aligned}$$
The updates for the Lagrangian multiplier and penalty parameter in each loss function are the same as those used in the member force model.
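The three total loss variants above are straightforward to evaluate once the residual $r$ and the coefficient matrix are available. The following NumPy sketch is ours (function and variable names are illustrative); for the displacement model, $d$, $\hat{d}$, and $K$ simply replace $t$, $\hat{t}$, and $A$:

```python
import numpy as np

def total_losses(t, t_hat, A, lam, beta):
    """Return (L_t^beta, L_t^lambda, L_t^{lambda,beta}) for the force model."""
    N = len(t)
    r = t - t_hat            # residual r = t - t_hat
    Ar = A @ r               # equilibrium violation, since p - A t_hat = A r
    L_r = (r @ r) / N        # residual loss (1/N)||r||^2
    L_q = beta * (Ar @ Ar)   # quadratic penalty term beta*||A r||^2
    L_L = lam @ Ar           # Lagrangian multiplier term lambda^T (A r)
    return L_r + L_q, L_r + L_L, L_r + L_L + L_q

def update_lambda(lam, A, r, beta):
    """Multiplier update for the augmented Lagrangian loss."""
    return lam + beta * (A @ r)
```

With a perfect prediction ($r = 0$), all three losses vanish and the multiplier stays fixed, so the equilibrium constraint adds no gradient at the solution.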

5. Numerical Examples

All of the experiments in this study were conducted on a macOS system equipped with a 14-core CPU, 20-core GPU, and a 16-core Neural Engine (Table 1). The implementation was based on Python 3.10, and the main software libraries used were NumPy (1.26.4) and PennyLane (0.40.0). All quantum circuit construction and optimization were performed using PennyLane’s default.qubit simulator.
The performance of each loss function case was evaluated using the mean squared error ($L_2$ error) and the coefficient of determination ($R^2$). The $R^2$ score and the $L_2$ error are defined as follows:
$$R^2 = 1 - \frac{\sum \left( y_{\text{target}} - y_{\text{pred}} \right)^2}{\sum \left( y_{\text{target}} - \bar{y} \right)^2},$$
$$L_2 \text{ error} = \frac{1}{N} \sum_{i=1}^{N} \left( y_{\text{target}} - y_{\text{pred}} \right)^2.$$
Here, $y_{\text{pred}}$ denotes the predicted value, $y_{\text{target}}$ refers to the target value, $N$ indicates the total number of samples, and $\bar{y}$ represents the mean value of $y_{\text{target}}$. A summary of the truss and dome models used in this study is presented in Table 2.
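Both evaluation metrics can be computed directly; a short NumPy sketch (ours, with illustrative names) matching the definitions above:

```python
import numpy as np

def r2_score(y_target, y_pred):
    """Coefficient of determination R^2."""
    ss_res = np.sum((y_target - y_pred) ** 2)            # residual sum of squares
    ss_tot = np.sum((y_target - np.mean(y_target)) ** 2) # total sum of squares
    return 1.0 - ss_res / ss_tot

def l2_error(y_target, y_pred):
    """Mean squared (L2) error over N samples."""
    return np.mean((y_target - y_pred) ** 2)
```

A perfect prediction yields $R^2 = 1$ and an $L_2$ error of 0, which is the limiting behavior reported for the best-performing configurations in the tables below.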
This section validates the proposed index-based single- and separate-domain QNN architectures by applying them to various numerical examples and analyzing the prediction accuracy and loss functions. In particular, the number of parameters is compared according to the number of qubits and layers, and the model performance is quantitatively evaluated in terms of convergence characteristics and accuracy based on the $L_2$ error.

5.1. 10-Bar Plane Truss

This section presents the structural analysis of the 10-bar plane truss. As shown in Figure 3, the model consists of 10 linear members and 6 nodes, with both horizontal and vertical distances between nodes set to L = 1 m . Vertical loads of p 3 y = 1 kN and p 5 y = 1 kN are applied to Node 3 and Node 5, respectively. The objective of this analysis is to predict the axial forces in all 10 members. After encoding the input data for each member, the axial forces are predicted using a single-domain QNN model, as illustrated in Figure 2a. The material properties used in the truss model are as follows: Young’s modulus E = 2.0 × 10 11 N / m 2 , cross-sectional area A = 1.0 cm 2 , and density ρ = 7860 kg / m 3 . The training was conducted using the Adam optimizer with a learning rate of 0.05 for up to 5000 epochs.
The 10-bar plane truss, as a simple two-dimensional structure, was used to focus on analyzing the performance with respect to the number of qubits and layers. To this end, Case 1 was evaluated using various trainable single-domain quantum circuit configurations combining 2–11 qubits and 2–11 layers.
The analysis results for Case 1 are summarized in Table 3 for the range of 2–6 qubits and 2–6 layers. As the number of qubits and layers increased, the $L_2$ error decreased, showing a tendency of improved prediction performance. The coefficient of determination $R^2$ increased with deeper layers, and most configurations with six or more layers reached $R^2 \approx 1.0$. In particular, both the 5Q–6L and 4Q–6L configurations achieved $R^2 = 1.0$. As shown in Figure 4, the results in Area 2 exhibited satisfactory performance with $R^2$ values approaching 1.0, and, within the Area 1 region, a tendency of improved prediction performance was also observed as the number of qubits and layers increased. The time values in Table 3 represent the time required to complete the entire training for each qubit–layer configuration and are used consistently throughout this paper. However, simply increasing the number of qubits and layers did not always lead to improved performance; as shown in Figure 4a, configurations beyond 6Q–6L exhibited either reduced prediction performance or lower training efficiency. In particular, in the 7–11 qubit range, a tendency of decreasing prediction performance was observed as the number of layers increased.
The convergence curve illustrates how the L 2 error decreases with training epochs and is used to assess the model’s training stability and accuracy. Figure 5a–e shows the convergence curves for each qubit–layer configuration in Case 1. Figure 5f visualizes the loss function behavior of the 6-layer models, which showed the best performance for each qubit count. To evaluate the effectiveness of the proposed quantum neural network (QNN) architecture, a comparative analysis was also conducted using a classical neural network (NN) model on the same 10-bar plane truss problem.
The performance of a classical neural network (NN) model corresponding to Table 3 is presented in Figure 6. To enable a fair comparison, the number of layers and the number of neurons per layer in the fully-connected NN were set equal to the number of layers and qubits in the QNN. The training settings, including the Adam optimizer and learning rate, were also matched, with the activation function set to tanh. Figure 6a shows the L 2 error of the NN model ( L 2 ( D ) ), while (b) illustrates the difference between the NN and QNN errors ( L 2 ( D ) L 2 ( Q ) ). A positive value indicates that the QNN achieved superior prediction accuracy. Figure 6c depicts the parameter ratio n Q / n D , where values less than 1.0 indicate that the QNN used fewer parameters than the NN.
According to Table 3, the QNN achieved the lowest $L_2$ error when the number of layers was six, and the best performance was exhibited with four qubits. Compared to the NN with a 6-layer, 4-neuron configuration, the QNN achieved better prediction results with only about 64% of the total parameters. Even for the 6-layer, 5- and 6-neuron NN configurations, which showed relatively good performance, the QNN achieved lower $L_2$ errors (e.g., $4.79 \times 10^{-4}$) with only about 50% of the parameters.
Figure 7 visualizes the distribution of the coefficient of determination $R^2$, the $L_2$ error, and the epoch at which the minimum $L_2$ error was reached for the 2–6 qubit and 2–6 layer combinations in Cases 2–4. This confirms that, as the number of qubits and layers increased, both the overall prediction performance and the convergence speed improved.
As shown in Table 3, most models in Case 1 achieved high prediction accuracy based on $R^2$ within the 2–6 qubit and 2–6 layer range. In particular, all of the configurations converged to at least $R^2 = 9.990 \times 10^{-1}$, with some reaching $R^2 = 1.00$.
For Cases 2–4 as well, as presented in Figure 7, all of the models within the Area 1 configuration (2–6 qubits and 2–6 layers) recorded excellent prediction accuracy with $R^2 > 0.998$. Figure 7 visualizes the $R^2$ convergence curves, $L_2$ error values, and the minimum-$L_2$-error epoch across different qubit–layer configurations for each case. Figure 8 provides a comparison of the predicted values $y_{\text{pred}}$ and the target values $y_{\text{target}}$.
Therefore, the Area 1 range (2–6 qubits and 2–6 layers) was adopted as the design space for the analysis of Cases 2–4, including the common configuration of Area 3 (6 qubits and 6 layers). The detailed model configurations and experimental scopes for each single-domain problem are explained in the subsequent sections.

5.2. 25-Bar Plane/Space Truss

5.2.1. 25-Bar Plane Truss

The structural analysis of the 25-bar planar truss presented in Figure 9 is performed and discussed in this section. The target model consists of a total of 25 members and 14 nodes, where seven nodes are uniformly arranged in both the horizontal and vertical directions on the top and bottom with a spacing of $L = 9.144\ \mathrm{m}$. Accordingly, the total length and height of the structure are $54.864\ \mathrm{m}$ and $9.144\ \mathrm{m}$, respectively. Vertical loads are applied in the y-direction at the top nodes of the truss, with $P_{1y} = P_{7y} = 400.2\ \mathrm{kN}$ applied at Nodes 1 and 7, and $P_{2y}$–$P_{6y} = 800.4\ \mathrm{kN}$ applied at Nodes 2 through 6. All members share identical material properties: Young's modulus $E = 69 \times 10^{9}\ \mathrm{N/m^2}$, cross-sectional area $A = 0.01\ \mathrm{m^2}$, and density $\rho = 2700\ \mathrm{kg/m^3}$. The objective of this example was to predict the axial force developed in each member under the given loading conditions. The analysis was conducted using the single-domain model shown in Figure 2a and the separate-domain model shown in Figure 2b. The models were trained using the Adam optimizer with a learning rate of 0.05 for 10,000 iterations. For the single-domain condition, Case 1 was analyzed across a range of 2–6 qubits and 2–20 layers to enable parameter comparisons.
Table 4 presents the $R^2$ scores corresponding to various 2–6 qubit combinations under both the single-domain and separate-domain settings. Notably, when the structure was divided into five domains, the 2-qubit–3-layer (2Q–3L) configuration achieved excellent performance with an $L_2$ error of $2.8102 \times 10^{-14}$ and $R^2 = 1.000$ in only 526 epochs. In contrast, the 3-qubit–14-layer (3Q–14L) configuration under the single-domain setting required 6193 epochs to reach a comparable accuracy level with an $L_2$ error of $4.2138 \times 10^{-11}$. Furthermore, the separate-domain model used only 90 parameters, which is approximately 6.25% fewer than the 96 parameters required in the single-domain case.
Based on these results, the same experimental settings applied to the 10-bar planar truss were extended to the 25-bar planar and space truss models. For Cases 2–4, a restricted hyperparameter range of 2–6 qubits and 2–6 layers was used, with training conducted under the separate-domain strategy. Figure 10 compares the performance between the single-domain and separate-domain settings for Case 1.
According to the results in Table 5 for Case 2, the single-domain 2-qubit–16-layer (2Q–16L) model required a total of 96 parameters and 822 epochs to reach an $L_2$ error of $2.4883 \times 10^{-12}$. In contrast, the 2-qubit–3-layer (2Q–3L) model trained on five separate domains achieved a lower $L_2$ error of $3.4959 \times 10^{-14}$ in just 533 epochs using only 90 parameters, showing superior convergence despite a 6.25% reduction in parameter count.
A similar trend was observed in the Case 3 results, as shown in Table 6. Under the condition $\beta = 1 \times 10^{6}$, the single-domain 3-qubit–18-layer (3Q–18L) model required 162 parameters and 6027 epochs to reach an $L_2$ error of $3.6791 \times 10^{-8}$. In contrast, the separate-domain 2-qubit–4-layer (2Q–4L) model used only 120 parameters to achieve a significantly lower $L_2$ error of $2.6853 \times 10^{-9}$ in just 2615 epochs.
The results in Table 7 also confirm the learning efficiency of the separate-domain approach in Case 4. In the single-domain setting, the 3-qubit–19-layer (3Q–19L) model required 2421 epochs and 171 parameters to reach an $L_2$ error of $2.9086 \times 10^{-8}$ and $R^2 = 1.0$. Meanwhile, the 2-qubit–5-layer (2Q–5L) model trained with five domains reached a lower $L_2$ error of $1.8233 \times 10^{-9}$ and $R^2 = 1.0$ in just 183 epochs using only 150 parameters. These results demonstrate that, even with a reduced number of parameters, the domain-splitting strategy yields significantly improved training speed and prediction accuracy. On the other hand, under the condition $\beta = 0.01$, the $L_2$ error converged to a relatively large value, indicating unsatisfactory performance.
Figure 11 and Figure 12 visualize the comparison between the predicted and true values and the $L_2$ error convergence behavior under the condition of training with five subdomains at $\beta = 1 \times 10^{6}$. Among the tested configurations, the 2Q–3L model in Case 2 converged to $L_2 = 3.4959 \times 10^{-14}$ at 533 epochs, and the 2Q–4L model in Case 3 reached $L_2 = 2.6853 \times 10^{-9}$ at 2615 epochs. Meanwhile, the 2Q–5L configuration in Case 4 achieved $L_2 = 1.8233 \times 10^{-9}$ in just 183 epochs.

5.2.2. 25-Bar Space Truss

This section presents the analysis of a 25-bar space truss structure and discusses the results. The target structure consists of 10 nodes and 25 members distributed in the 3D xyz space. Nodes 1 and 2 are located at the top with a height of $H = 2.5\ \mathrm{m}$, while fixed support conditions are applied to the four bottom nodes (Nodes 7–10). The remaining nodes (Nodes 3–6) are placed at the intermediate height $z = 0$, forming an overall symmetric structure. The nodal coordinates are defined based on a horizontal spacing of $L_1 = 1.0\ \mathrm{m}$, a diagonal length $L_2 = 2.5\ \mathrm{m}$, and a vertical height $H = 2.5\ \mathrm{m}$. External loads are applied to the top Nodes 1 and 2, with magnitudes of $p_{1y} = 80\ \mathrm{kN}$, $p_{1z} = 20\ \mathrm{kN}$, $p_{2y} = 80\ \mathrm{kN}$, and $p_{2z} = 20\ \mathrm{kN}$. The material properties applied to the structure are as follows: Young's modulus $E = 2.0 \times 10^{11}\ \mathrm{N/m^2}$, density $\rho = 7860\ \mathrm{kg/m^3}$, and cross-sectional area $A = 1.0 \times 10^{-5}\ \mathrm{m^2}$. The objective of this example was to predict the axial force developed in each member under the given loading conditions. Training was performed using the Adam optimizer with a learning rate of 0.05 for up to 10,000 iterations.
Based on the analysis results of the 25-bar plane truss problem, the 25-member space truss was analyzed by dividing it into five domains (Figure 13). As shown in Table 8, both Case 1 and Case 2 employed the 2Q–3L configuration with a total of 90 parameters. The $L_2$ errors converged to $3.4760 \times 10^{-13}$ after 1208 training epochs for Case 1 and to $2.9885 \times 10^{-13}$ after between 1236 and 1975 training epochs for Case 2. In Cases 3 and 4, although the number of parameters increased to the range of 180–375, satisfactory results were still obtained. Notably, in Case 4, the 3Q–5L configuration with 225 parameters achieved convergence to an $L_2$ error of $3.9648 \times 10^{-10}$ over 9478 epochs.
Figure 14 illustrates the $L_2$ error convergence for the 25-bar space truss under the condition of $\beta = 1 \times 10^{6}$, while Figure 15 presents the comparison between the predicted values $y_{\text{pred}}$ and the exact results $y_{\text{exact}}$ for each case. In Case 3, the 2Q–6L configuration, with a total of 180 parameters, achieved an $L_2$ error of $7.4887 \times 10^{-10}$ within 1630 epochs. For Case 4, the 3Q–5L model reached an $L_2$ error of $3.9648 \times 10^{-10}$ at 9478 epochs. Cases 1 and 2 converged to $L_2$ errors on the order of $10^{-13}$ within 1208 and 1236 epochs, respectively, yielding satisfactory results.

5.3. 6-by-6 Square Grid Dome

The structure shown in Figure 16 consists of a 6 × 6 grid of nodes arranged at uniform intervals of $L = 5.0\ \mathrm{m}$. The total number of nodes in the figure is 85, and the structure is composed of 288 members. The curved surface geometry was generated in the form of $z = f(x, y)$, with the maximum deflection occurring at the center of the structure. This study addresses the problem of predicting axial forces under varying loading conditions and geometries, and it compares the performance of QNN models under different loss function settings. Vertical loads in the z-direction were applied to the nodes, with fixed boundary conditions imposed on the bottom nodes. A uniform external load of 30 kN was applied either at the top or bottom nodes. All members in this model share identical material properties, with a Young's modulus $E = 2.0 \times 10^{11}\ \mathrm{N/m^2}$, a density $\rho = 7860\ \mathrm{kg/m^3}$, and a cross-sectional area $A = 1.0 \times 10^{-3}\ \mathrm{m^2}$. Model training was conducted using the Adam optimizer with a learning rate of 0.05 for up to 10,000 iterations.
The axial force analysis results for the 6-by-6 grid dome structure are summarized in Table 9, and the prediction results are shown in Figure 17. In Case 1, the 5Q–6L configuration with a total of 2880 parameters achieved an $L_2$ error of $1.4219 \times 10^{-7}$ and $R^2 = 1.0$ after 4739 training epochs. In Case 2, the 4Q–6L configuration, consisting of 2304 parameters with the same number of layers, converged to an $L_2$ error of $1.4032 \times 10^{-7}$ under the condition $\beta = 1 \times 10^{6}$, requiring 7636 epochs for training. Under the same architecture and a penalty coefficient of $\beta = 1 \times 10^{5}$, the model reached an $L_2$ error of $1.6932 \times 10^{-7}$ in only 570 epochs. In Case 3, the 6Q–6L model with 3456 parameters converged to $L_2$ errors of $1.0637 \times 10^{-6}$ and $1.0364 \times 10^{-6}$ for penalty parameters $\beta = 1 \times 10^{6}$ and $\beta = 1 \times 10^{5}$, respectively, with training times of 316.643 s and 316.336 s. Lastly, in Case 4, the 5Q–6L configuration, comprising 2880 parameters, achieved an $L_2$ error of $6.8858 \times 10^{-7}$ and $R^2 = 0.99997$ under the condition $\beta = 1 \times 10^{6}$, yielding a satisfactory result.
Figure 17 and Figure 18 show the prediction results and $L_2$ error convergence trends for the 6-by-6 grid dome structure under the condition $\beta = 1 \times 10^{6}$. In Case 1, the 5Q–6L configuration, with a total of 2880 parameters, achieved an $L_2$ error of $1.4219 \times 10^{-7}$. Case 2, using a 4Q–6L configuration with 2304 parameters, reached an $L_2$ error of $1.4032 \times 10^{-7}$, demonstrating relatively stable prediction performance. In contrast, Case 3, with a 6Q–6L configuration and 3456 parameters, converged to an $L_2$ error of $1.0637 \times 10^{-6}$. Case 4 also yielded satisfactory results, achieving an $L_2$ error of $6.8858 \times 10^{-7}$ with 2880 parameters.
The displacement analysis results for the 6-by-6 square grid dome structure are summarized in Table 10, while Figure 19 presents the prediction results under the condition β = 1 × 10 6 . The L 2 error convergence trend is shown in Figure 20. The displacement control formulation described in Equation (44) of Section 4.5.3 was applied using the same parameter values as in the axial force analysis.
In Case 1, the 3Q–6L configuration with a total of 1458 parameters achieved an L 2 error of 2.5581 × 10 9 and R 2 = 1.0 within 762 training epochs. In Case 2, the same 3Q–6L architecture with 1458 parameters converged to an L 2 error of 1.9097 × 10 10 and R 2 = 1.0 in 946 epochs under the condition β = 1 × 10 6 . In contrast, under the condition β = 1 × 10 5 , the model with 2430 parameters reached an L 2 error of 4.6265 × 10 9 after 9674 epochs. In Case 3, the 6Q–6L configuration with 2916 parameters achieved an L 2 error of 2.5805 × 10 9 in 662 epochs under β = 1 × 10 6 . Conversely, under β = 1 × 10 5 , the 4Q–6L model with 1944 parameters converged to an L 2 error of 4.3679 × 10 8 in 5633 epochs. In Case 4, the 6Q–6L configuration with 2916 parameters achieved an L 2 error of 3.4688 × 10 9 and R 2 = 1.0 in 659 epochs under β = 1 × 10 6 . Under the same condition, the 3Q–6L configuration with 1458 parameters reached an L 2 error of 1.0672 × 10 7 at 1103 epochs, yielding reasonably satisfactory results.

5.4. Discussion

This study performed structural analyses on various truss systems by integrating an index-based quantum neural network (QNN) architecture with a mechanics-informed loss function. The predictive performance was compared with that of a conventional neural network (NN), and convergence characteristics were quantitatively analyzed depending on structural types and hyperparameter settings.
In the case of the 10-bar planar truss (Case 1), the QNN produced lower L 2 errors with fewer parameters compared to the NN. For example, the 4Q–6L QNN configuration achieved an L 2 error of 1.9314 × 10 10 while using approximately 64% of the parameters required by the NN. In some configurations, the QNN yielded acceptable results with less than half the number of parameters.
To examine the impact of the proposed indexed QNN architecture settings, a grid search was conducted over 2–11 qubits and 2–11 layers, and the distribution of the coefficient of determination ( R 2 ) was evaluated. The results indicated that training performance remained stable around 6 layers, while configurations with more than 7 qubits showed decreasing predictive accuracy as the number of layers increased. Accordingly, the training for Cases 2 through 4 was limited to 2–6 qubits and 2–6 layers. The domain decomposition method also reduced the total parameter count.
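The architecture scan described above amounts to a nested loop over qubit and layer counts. The sketch below stubs out the training step (`train_and_score` is a hypothetical placeholder, not the paper's routine) so that only the structure of the search is shown:

```python
# Sketch of the qubit/layer grid search. train_and_score stands in for
# training one indexed-QNN configuration and returning its R^2 score;
# it is stubbed here so the scan itself runs stand-alone.
def train_and_score(n_qubits, n_layers):
    # Placeholder: substitute the actual QNN training routine here.
    return 0.0

results = {}
for n_qubits in range(2, 12):        # 2-11 qubits
    for n_layers in range(2, 12):    # 2-11 layers
        results[(n_qubits, n_layers)] = train_and_score(n_qubits, n_layers)

# The full scan covers 10 x 10 = 100 configurations; based on the observed
# R^2 distribution, Cases 2-4 were restricted to 2-6 qubits and 2-6 layers.
area1 = {k: v for k, v in results.items() if k[0] <= 6 and k[1] <= 6}
print(len(results), len(area1))   # 100 configurations, 25 in the restricted region
```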
Similar patterns were observed in the analysis of the 25-bar truss. In Case 2, the single-domain 2Q–16L model required 96 parameters and 822 epochs to reach an L 2 error of 2.4883 × 10 12 , whereas the domain-split 2Q–3L model reached an L 2 error of 3.4959 × 10 14 in 533 epochs with 90 parameters. In Case 3, the single-domain 3Q–18L model reached an L 2 error of 3.6791 × 10 8 using 162 parameters and 6027 epochs, while the domain-split 2Q–4L model achieved 2.6853 × 10 9 in 2615 epochs with 120 parameters. In Case 4, the single-domain 3Q–19L model required 171 parameters and 2421 epochs to achieve 2.9086 × 10 8 , whereas the domain-split 2Q–5L model reached 1.8233 × 10 9 in 183 epochs with 150 parameters.
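The domain-split models above assign disjoint subsets of member indices to independent quantum circuits. A minimal sketch of such a partition for the 25 member indices into 5 domains follows; the paper does not specify the exact assignment, so a contiguous, near-equal split is assumed for illustration:

```python
def split_domains(indices, n_domains):
    # Partition a list of member indices into n_domains contiguous,
    # near-equal chunks; each chunk is then handled by its own circuit.
    k, r = divmod(len(indices), n_domains)
    out, start = [], 0
    for d in range(n_domains):
        size = k + (1 if d < r else 0)
        out.append(indices[start:start + size])
        start += size
    return out

members = list(range(1, 26))        # member indices 1-25 of the 25-bar truss
domains = split_domains(members, 5)
print([len(d) for d in domains])    # [5, 5, 5, 5, 5]
```

Because the domains are disjoint, the circuits can be trained in parallel, which is the source of the parameter and epoch savings reported above.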
The domain decomposition method was also applied to the 25-bar space truss. In both Case 1 and Case 2, the same 2Q–3L configuration was used, and the models converged to L 2 errors of 3.4760 × 10 13 and 2.9885 × 10 13 at 1208 and 1236 epochs, respectively, each with 90 parameters. In Case 3, the 2Q–6L model reached 7.4887 × 10 10 in 1630 epochs using 180 parameters. In Case 4, the 3Q–5L model achieved 3.9648 × 10 10 in 9478 epochs with 225 parameters, indicating acceptable prediction performance.
Axial force and displacement analysis was also conducted for the 6×6 grid dome structure. In Case 1, the 5Q–6L model converged to an L 2 error of 1.4219 × 10 7 . In Case 2, the 4Q–6L model under the condition β = 1 × 10 6 achieved an error of 1.4032 × 10 7 , and, under β = 1 × 10 5 , the same model reached 1.6932 × 10 7 in 570 epochs. In Case 3 and Case 4, the 6Q–6L and 5Q–6L models, respectively, converged to errors on the order of 10 6 , indicating that the QNN framework was applicable, even to complex structural configurations.
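The parameter counts quoted throughout this section follow a simple rule if each qubit carries three rotation angles per layer in every domain circuit. This factor of three is inferred from the reported counts rather than stated explicitly, so the helper below should be read as a consistency check under that assumption:

```python
def qnn_param_count(n_qubits, n_layers, n_domains=1, angles_per_qubit=3):
    # Assumes each entangling layer applies three rotation angles per qubit
    # and each domain trains an independent circuit; this rule is inferred
    # from the reported parameter counts, not stated explicitly.
    return angles_per_qubit * n_qubits * n_layers * n_domains

# Cross-checks against counts reported in the text:
print(qnn_param_count(4, 6))               # 72   (10-bar truss, 4Q-6L)
print(qnn_param_count(2, 3, n_domains=5))  # 90   (25-bar truss, 5-domain 2Q-3L)
print(qnn_param_count(5, 6, n_domains=32)) # 2880 (grid dome, 32-domain 5Q-6L)
print(qnn_param_count(3, 6, n_domains=27)) # 1458 (dome displacement, 27-domain 3Q-6L)
```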
These results show that the domain decomposition method reduced the total number of parameters by approximately 6.25% to 25% while also yielding acceptable results in terms of prediction accuracy and convergence speed compared to single-domain models. The experiments demonstrated that the indexed QNN based on domain decomposition produced valid results in the structural analysis.

6. Conclusions

In this study, an index-based quantum neural network (QNN) based on a variational quantum circuit (VQC) is proposed as a surrogate model for the analysis of axial force and displacement in truss structures. The proposed QNN adopts a discrete input format based on member and node indices instead of coordinate-based representations, and it also introduces a domain-separate strategy. The loss function includes physics-based constraints derived from the force method and displacement method, and it is formulated using an augmented Lagrangian approach to naturally incorporate structural mechanics information into the training process.
Numerical experiments were conducted on various structural configurations, including the 10-bar and 25-bar planar and spatial trusses and the 6 × 6 grid dome. For relatively simple structures, stable learning was achieved through shallow circuit configurations, and, as the structural complexity increased, the domain-separate strategy showed advantageous performance in terms of parameter efficiency. In particular, the domain-separate strategy was able to achieve satisfactory results, even with the same or fewer number of parameters compared to the single-domain configuration.
According to the performance analysis based on the number of qubits and circuit depth, the most stable convergence tendency appeared around six layers, and increasing the number of qubits did not always improve performance. This suggests that, when applying QNNs to structural analysis problems, circuit expansion alone is insufficient; modeling strategies such as domain separation must be applied in combination.
In conclusion, this study numerically validated the practical applicability of QNN-based structural analysis even under limited quantum resources. Future research directions include the design of quantum circuits robust to noise, the optimization of adaptive ansatz structures, and the application of quantum natural gradient-based learning methods. These approaches are expected to contribute to expanding QNN-based numerical analysis to advanced structural applications, such as real-time monitoring, damage detection, and optimal design.
Furthermore, in order to enhance the performance of QNNs and perform more precise structural analysis, an in-depth investigation into the role of quantum entanglement is necessary. Although this study used quantum circuits with strong entanglement, future studies should systematically analyze the effects of entanglement strength, depth, and configuration on prediction accuracy and convergence characteristics. In particular, theoretical and numerical validation of the synergy between domain separation and entanglement structure could provide insights into the efficient learning of complex structural interactions and enhance the practicality and scalability of QNN-based structural analysis.

Author Contributions

Conceptualization, S.S., S.L. and H.H.; methodology, S.S. and H.H.; software, S.S. and H.H.; validation, S.S., H.H. and S.L.; formal analysis, H.H.; investigation, H.H.; resources, S.S. and H.H.; data curation, H.H.; writing—original draft preparation, H.H.; writing—review and editing, H.H. and S.S.; visualization, H.H.; supervision, S.S.; project administration, S.S.; funding acquisition, S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (RS-2023-00248809 and RS-2024-00413824), and by an NRF grant funded by the Ministry of Science and ICT (RS-2024-00352968).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets presented in this article are not readily available because of technical limitations. Requests to access the datasets should be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

List of Notations

The notations used in this study are listed below.
Symbol Description
x i Each input value
x j Coordinate vector of node j
p j External force vector applied at node j
p Nodal force vector
d True nodal displacement vector
d ^ Predicted nodal displacement vector
e Elongation vector
u Extracted structural solution
u ^ Intended solution from QNN
N Set of node indices
M Set of member indices
C ( i ) Connectivity of member i
N Number of training samples
EA ( i ) Axial rigidity for member i
A Equilibrium matrix
B Compatibility matrix
F Flexibility matrix (diagonal)
K Global stiffness matrix
t True member force vector
t ^ Predicted member force vector
t p Particular solution
t s Complementary homogeneous solution
W s Null space of A
| 0 n Initial quantum state
U θ Variational quantum circuit
| ψ ( θ , x ) Quantum state after VQC
R X , R Y , R Z Single-qubit rotation gates
R rot ( x ) Generalized rotation gate
P Measurement operator
P Expectation value of P
L r Residual loss term
L q Quadratic penalty loss term
L L Lagrangian loss term
L t β Total loss with penalty term
L t λ Total loss with Lagrangian term
L t λ , β Augmented Lagrangian loss
λ Lagrangian multiplier vector
β Penalty parameter
η Learning rate for λ
r Residual vector
J r Objective function
C ( θ ) Mechanical constraint function
Ω s Spatial domain
Ω p Set of physical parameters
θ Parameter vector
α General rotation or model parameter
E Elastic modulus
p x Axial force
u ( x ) Displacement function

Figure 1. Flowchart of the proposed structural analysis process using QNN.
Figure 2. Two types of quantum neural network models: (a) single-domain model using a single quantum circuit (VQC); and (b) multi-domain model that separates the input into multiple domains, each processed in parallel by an independent quantum circuit.
Figure 3. Configurations of a 10-bar plane truss.
Figure 4. Performance evaluation of the QNN models with 2 to 11 qubits and layers. (a) R 2 score distribution for Case 1. (b) L 2 error for varying qubit and layer configurations.
Figure 5. L 2 error curves for the 10-bar plane truss at β = 1 × 10 2 . (ae) Qubit–layer combinations within Area 1 (2–6 qubits, 2–6 layers). (f) Fixed 6-layer configurations across 2–6 qubits at the minimum L 2 epoch.
Figure 6. Comparison with previous studies: (a) L 2 error of the neural network model ( L 2 ( D ) ), (b) error difference between classical and quantum models ( L 2 ( D ) L 2 ( Q ) ), and (c) the parameter ratio between quantum and neural network models ( n ( Q ) / n ( N ) ).
Figure 7. The 10-bar plane truss R 2 , MSE, and minimum L 2 error epoch heatmaps for β = 1 × 10 2 : (a) Case 2, (b) Case 3, and (c) Case 4.
Figure 8. Comparison of y pred and y exact for β = 1 × 10 2 : (a) Case 1: 4Q–6L; (b) Case 2: 4Q–6L; (c) Case 3: 5Q–6L; and (d) Case 4: 4Q–6L.
Figure 9. Configurations of a 25-bar plane truss.
Figure 10. R 2 distributions for a QNN-based analysis of the 25-bar truss (Case 1): (a) Single-domain model with 2–11 qubits and 2–11 layers. (b) Five-domain model with 2–6 qubits and 2–6 layers.
Figure 11. Comparison of y pred and y exact for β = 1 × 10 6 : (a) Case 1: 5 Domain-2Q-3L, (b) Case 2: 5 Domain-2Q-3L, (c) Case 3: 5 Domain-2Q-4L, and (d) Case 4: 5 Domain-2Q-5L.
Figure 12. L 2 error comparison graph for the 25-bar truss with 5 domains, β = 1 × 10 6 .
Figure 13. Configurations of the 25-bar space truss: (a) 3D view; (b) side view in the yz plane; (c) side view in the xz plane; and (d) top view.
Figure 14. L 2 error comparison graph for the 25-bar space truss with 5 domains, β = 1 × 10 6 .
Figure 15. Comparison of y pred and y exact for β = 1 × 10 6 : (a) Case 1: 5 Domain-2Q-3L, (b) Case 2: 5 Domain-2Q-3L, (c) Case 3: 5 Domain-2Q-6L, and (d) Case 4: 5 Domain-3Q-5L.
Figure 16. Geometry of the 6-by-6 square grid dome. (a) Plan view of the xy plane, and (b,c) perspective views.
Figure 17. Comparison of y pred and y exact for β = 1 × 10 6 : (a) Case 1: 32 Domain-5Q-6L, (b) Case 2: 32 Domain-4Q-6L, (c) Case 3: 32 Domain-6Q-6L, and (d) Case 4: 32 Domain-5Q-6L.
Figure 18. L 2 error comparison graph for the 6-by-6 square grid dome with 32 domains, β = 1 × 10 6 .
Figure 19. Comparison of QNN performance for β = 1 × 10 6 : (a) Case 1: 27 Domain-3Q-6L, (b) Case 2: 27 Domain-3Q-6L, (c) Case 3: 27 Domain-6Q-6L, and (d) Case 4: 27 Domain-6Q-6L.
Figure 20. L 2 error comparison graph for the 6-by-6 square grid dome with 27 domains, β = 1 × 10 6 .
Table 1. Loss function configurations for QNN training.

Case | Loss Function Expression
Case 1 (Residual Only) | L_r
Case 2 (Quadratic Penalty) | L_r + β L_p
Case 3 (Lagrangian Only) * | L_r + λ^T L_p
Case 4 (Augmented Lagrangian) * | L_r + λ^T L_p + β L_p

* In Case 3, the Lagrangian multiplier λ was updated based on a learning rate η , while, in Case 4, λ was updated based on the penalty parameter β (see Equations (41) and (43)).
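The four loss settings in Table 1 differ only in how the mechanics constraint enters the objective. The toy sketch below illustrates the Case 4 update (augmented Lagrangian, with λ driven by the penalty parameter β) on a deliberately simple scalar problem — minimize (x − 2)² subject to x − 1 = 0 — chosen purely for illustration; it is not the paper's structural loss:

```python
# Toy augmented Lagrangian (Table 1, Case 4):
#   L(x) = f(x) + lam * c(x) + (beta / 2) * c(x)^2,
# with the multiplier updated as lam <- lam + beta * c(x) after each inner solve.
def f_grad(x):  return 2.0 * (x - 2.0)   # objective f(x) = (x - 2)^2
def c(x):       return x - 1.0           # constraint c(x) = 0

x, lam, beta, lr = 0.0, 0.0, 10.0, 0.01
for _ in range(50):                      # outer multiplier updates
    for _ in range(200):                 # inner gradient descent on L
        x -= lr * (f_grad(x) + lam + beta * c(x))
    lam += beta * c(x)                   # Case 4: lam driven by beta

print(round(x, 4), round(lam, 4))        # converges to x = 1, lam = 2
```

Case 2 corresponds to keeping λ fixed at zero, while Case 3 replaces the multiplier step with `lam += eta * c(x)` for a separate learning rate η.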
Table 2. Summary of the truss and dome example models used for structural analysis.

Structure Type | Nodes | Elements | Problem
10-bar Plane Truss | 6 | 10 | Force
25-bar Plane Truss | 14 | 25 | Force
25-bar Space Truss | 10 | 25 | Force
6-by-6 Square Grid Dome | 85 | 288 * | Force and Displacement

* For the displacement prediction model, the number of degrees of freedom (DOF) was 243.
Table 3. Case 1—10-bar plane truss: detailed comparative results.

Qubits | Layers | Total Params | Epoch | L2 Error | |L_t| | R2 Score | Time (s)
2 | 2 | 12 | 301 | 3.4171 × 10^−1 | 4.6522 × 10^−1 | 2.3920 × 10^−1 | 1.28913
2 | 3 | 18 | 635 | 2.3867 × 10^−1 | 4.9101 × 10^−1 | 4.6861 × 10^−1 | 1.16465
2 | 4 | 24 | 4990 | 3.0677 × 10^−2 | 6.0451 × 10^−2 | 9.3170 × 10^−1 | 1.32921
2 | 5 | 30 | 4975 | 1.1300 × 10^−2 | 2.0962 × 10^−2 | 9.7484 × 10^−1 | 1.44017
2 | 6 | 36 | 5000 | 8.4655 × 10^−5 | 2.4227 × 10^−4 | 9.9981 × 10^−1 | 1.63554
3 | 2 | 18 | 269 | 3.4171 × 10^−1 | 1.0136 × 10^0 | 2.3920 × 10^−1 | 1.58692
3 | 3 | 27 | 2678 | 2.3867 × 10^−1 | 6.8796 × 10^−1 | 4.6861 × 10^−1 | 1.41797
3 | 4 | 36 | 515 | 1.4707 × 10^−2 | 2.2248 × 10^−1 | 9.6726 × 10^−1 | 1.62781
3 | 5 | 45 | 1549 | 1.1300 × 10^−2 | 1.6334 × 10^−1 | 9.7484 × 10^−1 | 2.07655
3 | 6 | 54 | 3902 | 5.2657 × 10^−5 | 1.1168 × 10^−2 | 9.9988 × 10^−1 | 2.31483
4 | 2 | 24 | 655 | 3.4171 × 10^−1 | 1.0136 × 10^0 | 2.3920 × 10^−1 | 1.54701
4 | 3 | 36 | 1274 | 2.3867 × 10^−1 | 6.8796 × 10^−1 | 4.6861 × 10^−1 | 2.14466
4 | 4 | 48 | 4871 | 2.9703 × 10^−2 | 2.7311 × 10^−1 | 9.3387 × 10^−1 | 1.90128
4 | 5 | 60 | 4243 | 1.1300 × 10^−2 | 1.6331 × 10^−1 | 9.7484 × 10^−1 | 2.26965
4 | 6 | 72 | 4532 | 1.9314 × 10^−10 | 2.7657 × 10^−5 | 1.0000 × 10^0 | 3.07850
5 | 2 | 30 | 1971 | 3.4171 × 10^−1 | 1.0136 × 10^0 | 2.3920 × 10^−1 | 1.78833
5 | 3 | 45 | 394 | 2.3867 × 10^−1 | 6.8796 × 10^−1 | 4.6861 × 10^−1 | 1.98362
5 | 4 | 60 | 4352 | 1.4707 × 10^−2 | 2.2243 × 10^−1 | 9.6726 × 10^−1 | 2.41754
5 | 5 | 75 | 4592 | 1.1377 × 10^−2 | 1.7084 × 10^−1 | 9.7467 × 10^−1 | 2.73436
5 | 6 | 90 | 4444 | 5.3161 × 10^−7 | 1.1944 × 10^−3 | 1.0000 × 10^0 | 4.30919
6 | 2 | 36 | 2591 | 3.4171 × 10^−1 | 1.0137 × 10^0 | 2.3920 × 10^−1 | 2.13546
6 | 3 | 54 | 2527 | 2.3867 × 10^−1 | 6.8802 × 10^−1 | 4.6861 × 10^−1 | 2.34046
6 | 4 | 72 | 4976 | 3.1399 × 10^−2 | 2.8509 × 10^−1 | 9.3009 × 10^−1 | 2.87620
6 | 5 | 90 | 4288 | 1.1301 × 10^−2 | 1.6325 × 10^−1 | 9.7484 × 10^−1 | 5.02538
6 | 6 | 108 | 4975 | 4.7900 × 10^−4 | 3.4637 × 10^−2 | 9.9893 × 10^−1 | 4.10551
Table 4. Case 1—25-bar plane truss: detailed comparative results.

Domain = 1 (Single Domain)
Qubits | Layers | Total Params | Epoch | L2 Error | |L_t| | R2 Score | Time (s)
2 | 16 | 96 | 1154 | 2.5613 × 10^−12 | 2.5613 × 10^−12 | 1.0000 × 10^0 | 5.40458
3 | 14 | 126 | 6193 | 4.2138 × 10^−11 | 4.2138 × 10^−11 | 1.0000 × 10^0 | 6.94795
4 | 14 | 168 | 716 | 2.8528 × 10^−13 | 2.8528 × 10^−13 | 1.0000 × 10^0 | 8.36999
5 | 14 | 210 | 6454 | 3.2832 × 10^−13 | 3.2832 × 10^−13 | 1.0000 × 10^0 | 15.98381
6 | 14 | 252 | 5977 | 4.0991 × 10^−13 | 4.0991 × 10^−13 | 1.0000 × 10^0 | 19.66124

Domain = 5 (Separated Domain)
Qubits | Layers | Total Params | Epoch | L2 Error | |L_t| | R2 Score | Time (s)
2 | 3 | 90 | 526 | 2.8102 × 10^−14 | 2.9802 × 10^−7 | 1.0000 × 10^0 | 5.08589
3 | 3 | 135 | 520 | 2.8386 × 10^−14 | 3.5763 × 10^−7 | 1.0000 × 10^0 | 6.77821
4 | 3 | 180 | 350 | 1.8270 × 10^−14 | 2.9802 × 10^−7 | 1.0000 × 10^0 | 7.56486
5 | 3 | 225 | 310 | 3.4150 × 10^−14 | 4.4703 × 10^−7 | 1.0000 × 10^0 | 10.51405
6 | 3 | 270 | 319 | 4.0323 × 10^−14 | 4.7684 × 10^−7 | 1.0000 × 10^0 | 15.34173
Table 5. Case 2—25-bar plane truss: detailed comparative results.

Domain = 1 (Single Domain)
β | Qubits | Layers | Total Params | Epoch | L2 Error | |L_t| | R2 Score | Time (s)
1 × 10^−6 | 2 | 16 | 96 | 822 | 2.4883 × 10^−12 | 3.4959 × 10^−14 | 1.0000 × 10^0 | 5.02629
1 × 10^−5 | 2 | 16 | 96 | 788 | 4.3938 × 10^−12 | 4.3939 × 10^−12 | 1.0000 × 10^0 | 5.89781
1 × 10^−2 | 2 | 16 | 96 | 822 | 2.4883 × 10^−12 | 2.5561 × 10^−12 | 1.0000 × 10^0 | 5.83060

Domain = 5 (Separated Domain)
β | Qubits | Layers | Total Params | Epoch | L2 Error | |L_t| | R2 Score | Time (s)
1 × 10^−6 | 2 | 3 | 90 | 533 | 3.4959 × 10^−14 | 3.4959 × 10^−14 | 1.0000 × 10^0 | 5.02629
1 × 10^−5 | 2 | 3 | 90 | 359 | 3.9277 × 10^−14 | 3.9278 × 10^−14 | 1.0000 × 10^0 | 5.20799
1 × 10^−2 | 2 | 3 | 90 | 386 | 2.9523 × 10^−14 | 3.0109 × 10^−14 | 1.0000 × 10^0 | 5.11479
Table 6. Case 3—25-bar plane truss: detailed comparative results.

Domain = 1 (Single Domain)
β | Qubits | Layers | Total Params | Epoch | L2 Error | |L_t| | R2 Score | Time (s)
1 × 10^−6 | 3 | 18 | 162 | 6027 | 3.6791 × 10^−8 | 4.5719 × 10^−8 | 1.0000 × 10^0 | 8.961
1 × 10^−5 | 3 | 19 | 171 | 9950 | 4.4570 × 10^−7 | 4.5053 × 10^−7 | 1.0000 × 10^0 | 8.533
1 × 10^−2 | 6 | 20 | 360 | 4937 | 2.1295 × 10^−2 | 8.3458 × 10^−3 | 8.0375 × 10^−1 | 30.319

Domain = 5 (Separated Domain)
β | Qubits | Layers | Total Params | Epoch | L2 Error | |L_t| | R2 Score | Time (s)
1 × 10^−6 | 2 | 4 | 120 | 2615 | 2.6853 × 10^−9 | 3.0108 × 10^−9 | 1.0000 × 10^0 | 6.66417
1 × 10^−5 | 3 | 4 | 180 | 8370 | 5.1807 × 10^−9 | 2.9732 × 10^−9 | 1.0000 × 10^0 | 9.19804
1 × 10^−2 | 6 | 5 | 450 | 3841 | 1.6800 × 10^−2 | 3.4354 × 10^−2 | 7.2672 × 10^−1 | 24.66548
Table 7. Case 4—25-bar plane truss: detailed comparative results.
Domain = 1 (Single Domain)

| β | Qubits | Layers | Total Params | Epoch | L2 Error | \|Lt\| | R² Score | Time (s) |
|---|---|---|---|---|---|---|---|---|
| 1 × 10⁻⁶ | 3 | 19 | 171 | 2421 | 2.9086 × 10⁻⁸ | 2.2918 × 10⁻⁸ | 1.0000 × 10⁰ | 7.838 |
| 1 × 10⁻⁵ | 4 | 19 | 228 | 7435 | 5.7997 × 10⁻⁷ | 5.1730 × 10⁻⁷ | 1.0000 × 10⁰ | 11.927 |
| 1 × 10⁻² | 6 | 19 | 342 | 9803 | 2.1660 × 10⁻² | 3.2157 × 10⁻² | 8.3604 × 10⁻¹ | 29.565 |
Domain = 5 (Separated Domain)

| β | Qubits | Layers | Total Params | Epoch | L2 Error | \|Lt\| | R² Score | Time (s) |
|---|---|---|---|---|---|---|---|---|
| 1 × 10⁻⁶ | 2 | 5 | 150 | 183 | 1.8233 × 10⁻⁹ | 1.7689 × 10⁻⁹ | 1.0000 × 10⁰ | 7.11437 |
| 1 × 10⁻⁵ | 5 | 5 | 375 | 9803 | 1.4527 × 10⁻⁹ | 1.5128 × 10⁻⁹ | 9.9998 × 10⁻¹ | 18.35036 |
| 1 × 10⁻² | 6 | 4 | 360 | 6368 | 1.4292 × 10⁻² | 3.3215 × 10⁻² | 6.8316 × 10⁻¹ | 18.32772 |
Table 8. Case 1–4—25-bar space truss: detailed comparative results.
Case 1; Domain = 5 (Separated Domain)

| β | Qubits | Layers | Total Params | Epoch | L2 Error | \|Lt\| | R² Score | Time (s) |
|---|---|---|---|---|---|---|---|---|
| – | 2 | 3 | 90 | 1208 | 3.4760 × 10⁻¹³ | 3.4760 × 10⁻¹³ | 1.0000 × 10⁰ | 4.946 |
Case 2; Domain = 5 (Separated Domain)

| β | Qubits | Layers | Total Params | Epoch | L2 Error | \|Lt\| | R² Score | Time (s) |
|---|---|---|---|---|---|---|---|---|
| 1 × 10⁻⁶ | 2 | 3 | 90 | 1236 | 2.9885 × 10⁻¹³ | 2.9885 × 10⁻¹³ | 1.0000 × 10⁰ | 3.650 |
| 1 × 10⁻⁵ | 2 | 3 | 90 | 1975 | 2.7029 × 10⁻¹³ | 2.7029 × 10⁻¹³ | 1.0000 × 10⁰ | 3.553 |
| 1 × 10⁻² | 2 | 3 | 90 | 1369 | 8.5238 × 10⁻¹² | 8.7200 × 10⁻¹² | 1.0000 × 10⁰ | 3.979 |
Case 3; Domain = 5 (Separated Domain)

| β | Qubits | Layers | Total Params | Epoch | L2 Error | \|Lt\| | R² Score | Time (s) |
|---|---|---|---|---|---|---|---|---|
| 1 × 10⁻⁶ | 2 | 6 | 180 | 1630 | 7.4887 × 10⁻¹⁰ | 7.3686 × 10⁻¹⁰ | 1.0000 × 10⁰ | 9.586 |
| 1 × 10⁻⁵ | 3 | 6 | 270 | 9753 | 1.3174 × 10⁻⁹ | 1.3358 × 10⁻⁹ | 1.0000 × 10⁰ | 13.595 |
| 1 × 10⁻² | 6 | 4 | 360 | 11 | 7.4059 × 10⁻³ | 1.1727 × 10⁻² | 7.4215 × 10⁻¹ | 19.265 |
Case 4; Domain = 5 (Separated Domain)

| β | Qubits | Layers | Total Params | Epoch | L2 Error | \|Lt\| | R² Score | Time (s) |
|---|---|---|---|---|---|---|---|---|
| 1 × 10⁻⁶ | 3 | 5 | 225 | 9478 | 3.9648 × 10⁻¹⁰ | 4.1852 × 10⁻¹⁰ | 1.0000 × 10⁰ | 9.269 |
| 1 × 10⁻⁵ | 5 | 5 | 375 | 5802 | 3.0698 × 10⁻⁹ | 3.1014 × 10⁻⁹ | 1.0000 × 10⁰ | 14.580 |
| 1 × 10⁻² | 6 | 4 | 360 | 10 | 9.9744 × 10⁻³ | 2.9379 × 10⁻² | 7.2720 × 10⁻¹ | 16.755 |
Table 9. Case 1–4—6-by-6 grid dome: axial force prediction results.
Case 1; Domain = 32 (Separated Domain)

| β | Qubits | Layers | Total Params | Epoch | L2 Error | \|Lt\| | R² Score | Time (s) |
|---|---|---|---|---|---|---|---|---|
| – | 5 | 6 | 2880 | 4739 | 1.4219 × 10⁻⁷ | 1.4219 × 10⁻⁷ | 1.0000 × 10⁰ | 196.283 |
Case 2; Domain = 32 (Separated Domain)

| β | Qubits | Layers | Total Params | Epoch | L2 Error | \|Lt\| | R² Score | Time (s) |
|---|---|---|---|---|---|---|---|---|
| 1 × 10⁻⁶ | 4 | 6 | 2304 | 7636 | 1.4032 × 10⁻⁷ | 1.4032 × 10⁻⁷ | 1.0000 × 10⁰ | 118.481 |
| 1 × 10⁻⁵ | 4 | 6 | 2304 | 570 | 1.6932 × 10⁻⁷ | 1.6932 × 10⁻⁷ | 1.0000 × 10⁰ | 115.218 |
Case 3; Domain = 32 (Separated Domain)

| β | η | Qubits | Layers | Total Params | Epoch | L2 Error | \|Lt\| | R² Score | Time (s) |
|---|---|---|---|---|---|---|---|---|---|
| 1 × 10⁻⁶ | – | 6 | 6 | 3456 | 4599 | 1.0637 × 10⁻⁶ | 6.3044 × 10⁻⁷ | 9.9996 × 10⁻¹ | 316.643 |
| 1 × 10⁻⁵ | – | 6 | 6 | 3456 | 8198 | 1.0364 × 10⁻⁶ | 1.3578 × 10⁻⁷ | 9.9965 × 10⁻¹ | 316.336 |
Case 4; Domain = 32 (Separated Domain)

| β | Qubits | Layers | Total Params | Epoch | L2 Error | \|Lt\| | R² Score | Time (s) |
|---|---|---|---|---|---|---|---|---|
| 1 × 10⁻⁶ | 5 | 6 | 2880 | 6809 | 6.8858 × 10⁻⁷ | 2.2918 × 10⁻⁷ | 9.9997 × 10⁻¹ | 164.747 |
| 1 × 10⁻⁵ | 5 | 6 | 2880 | 4818 | 1.6210 × 10⁻⁶ | 3.7537 × 10⁻⁷ | 9.9962 × 10⁻¹ | 179.735 |
Table 10. Case 1–4—6-by-6 grid dome: displacement prediction results.
Case 1; Domain = 27 (Separated Domain)

| β | Qubits | Layers | Total Params | Epoch | L2 Error | \|Lt\| | R² Score | Time (s) |
|---|---|---|---|---|---|---|---|---|
| – | 3 | 6 | 1458 | 762 | 2.5581 × 10⁻⁹ | 6.9068 × 10⁻⁸ | 1.0000 × 10⁰ | 86.564 |
Case 2; Domain = 27 (Separated Domain)

| β | Qubits | Layers | Total Params | Epoch | L2 Error | \|Lt\| | R² Score | Time (s) |
|---|---|---|---|---|---|---|---|---|
| 1 × 10⁻⁶ | 3 | 6 | 1458 | 946 | 1.9097 × 10⁻¹⁰ | 5.1562 × 10⁻⁹ | 1.0000 × 10⁰ | 68.784 |
| 1 × 10⁻⁵ | 5 | 6 | 2430 | 9674 | 4.6265 × 10⁻⁹ | 1.2491 × 10⁻⁷ | 1.0000 × 10⁰ | 198.522 |
Case 3; Domain = 27 (Separated Domain)

| β | η | Qubits | Layers | Total Params | Epoch | L2 Error | \|Lt\| | R² Score | Time (s) |
|---|---|---|---|---|---|---|---|---|---|
| 1 × 10⁻⁶ | – | 6 | 6 | 2916 | 662 | 2.5805 × 10⁻⁹ | 5.2583 × 10⁻⁸ | 1.0000 × 10⁰ | 254.766 |
| 1 × 10⁻⁵ | – | 4 | 6 | 1944 | 5633 | 4.3679 × 10⁻⁸ | 7.2066 × 10⁻⁷ | 1.0000 × 10⁰ | 127.549 |
Case 4; Domain = 27 (Separated Domain)

| β | Qubits | Layers | Total Params | Epoch | L2 Error | \|Lt\| | R² Score | Time (s) |
|---|---|---|---|---|---|---|---|---|
| 1 × 10⁻⁶ | 6 | 6 | 2916 | 659 | 3.4688 × 10⁻⁹ | 8.0135 × 10⁻⁸ | 1.0000 × 10⁰ | 197.081 |
| 1 × 10⁻⁵ | 3 | 6 | 1404 | 1193 | 1.0672 × 10⁻⁷ | 2.3228 × 10⁻⁶ | 1.0000 × 10⁰ | 72.152 |
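For reference, the accuracy measures tabulated above (relative L2 error and R² score) can be reproduced with a few lines of code. The sketch below is a minimal, assumption-laden implementation: the exact normalization used by the authors is not stated in this excerpt, so the relative-L2 and coefficient-of-determination definitions here are standard ones, not necessarily theirs, and the reference vector is hypothetical.

```python
import numpy as np

def l2_error(pred: np.ndarray, ref: np.ndarray) -> float:
    # Relative L2 error: ||pred - ref|| / ||ref|| (assumed definition).
    return float(np.linalg.norm(pred - ref) / np.linalg.norm(ref))

def r2_score(pred: np.ndarray, ref: np.ndarray) -> float:
    # Coefficient of determination: R^2 = 1 - SS_res / SS_tot.
    ss_res = float(np.sum((ref - pred) ** 2))
    ss_tot = float(np.sum((ref - ref.mean()) ** 2))
    return 1.0 - ss_res / ss_tot

# Hypothetical reference axial forces and a near-exact surrogate prediction.
ref = np.array([1.0, -2.0, 3.0, -4.0])
pred = ref + 1e-7

print(l2_error(pred, ref))  # small relative error, on the order of 1e-8
print(r2_score(pred, ref))  # close to 1.0
```

An R² entry of 1.0000 × 10⁰ in the tables corresponds to a residual sum of squares that is negligible relative to the variance of the reference solution, while values such as 7.27 × 10⁻¹ indicate a visibly degraded fit.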
Share and Cite

MDPI and ACS Style

Ha, H.; Shon, S.; Lee, S. Domain-Separated Quantum Neural Network for Truss Structural Analysis with Mechanics-Informed Constraints. Biomimetics 2025, 10, 407. https://doi.org/10.3390/biomimetics10060407

