Abstract
Structural dynamics analysis is essential for predicting the behavior of engineering systems under dynamic forces. This study presents a hybrid framework that combines analytical modeling, machine learning, and optimization techniques to enhance the accuracy and efficiency of dynamic response predictions for Single-Degree-of-Freedom (SDOF) systems subjected to harmonic excitation. Utilizing a classical spring–mass–damper model, Fourier decomposition is applied to derive transient and steady-state responses, highlighting the effects of damping, resonance, and excitation frequency. To overcome the uncertainties and limitations of traditional models, Extended Kalman Filters (EKFs) and Physics-Informed Neural Networks (PINNs) are incorporated, enabling precise parameter estimation even with sparse and noisy measurements. This paper uses Adam followed by LBFGS to improve accuracy while limiting runtime. Numerical experiments using 1000 time samples with a 0.01 s sampling interval demonstrate that the proposed PINN model achieves a displacement MSE of 0.0328, while the Eurocode 8 response-spectrum estimation yields 0.047, illustrating improved predictive performance under noisy conditions and biased initial guesses. Although the present study focuses on a linear SDOF system under harmonic excitation, it establishes a conceptual foundation for adaptive dynamic modeling that can be extended to performance-based seismic design and to future calibration of Eurocode 8. The harmonic framework isolates the fundamental mechanisms of amplitude modulation and damping adaptation, providing a controlled environment for validating the proposed PINN–EKF approach before its application to transient seismic inputs. Controlled-variable analyses further demonstrate that key dynamic parameters can be estimated with relative errors below 1%—specifically 0.985% for damping, 0.391% for excitation amplitude, and 0.692% for excitation frequency—highlighting suitability for real-time diagnostics, vibration-sensitive infrastructure, and data-driven design optimization. This research deepens our understanding of vibratory behavior and supports future developments in smart monitoring, adaptive control, resilient design, and structural code modernization.
1. Introduction
Since the early 2000s, civil engineering has increasingly used soft computing techniques—artificial neural networks, genetic algorithms, fuzzy logic, and wavelets—for numerical problems [1]. Kicinger et al. [2] analyzed evolutionary computation for structural design, particularly in topological optimization, while Saka and Geem [3] focused on mathematical modeling to optimize steel frame structures. Recent advances in high-performance computing have further expanded the applications of earthquake engineering by enabling the efficient processing of large datasets. Xie et al. [4] reviewed developments in seismic hazard analysis, system identification, damage detection, fragility assessment, and structural control. Within this context, machine learning (ML) techniques now offer data-driven approaches to modeling and predicting structural dynamic responses, with numerous studies demonstrating their effectiveness for linear, nonlinear, and soil–structure interaction behaviors [5,6,7,8,9,10,11,12,13].
Oliver Richard de Lautour et al. [5] proposed an artificial neural network (ANN) method to predict seismic damage in 2D reinforced concrete (RC) frames using extensive structural and ground motion parameters. Trained on nonlinear FEM simulations, the ANN accurately mapped input features to damage indices, outperforming traditional vulnerability curves. Byung Kwan Oh et al. [6] developed a convolutional neural network (CNN) to predict displacement responses from acceleration histories, validated on the ASCE benchmark and RC frame experiments, achieving high accuracy even with overlapping datasets. Sadjad Gharehbaghi et al. [7] compared an ANN and wavelet-weighted least squares support vector machines (WWLSSVMs) for predicting inelastic seismic responses of an 18-story RC frame, finding that the ANN was slightly more accurate and robust with limited training data. Similarly, Pritam Hait et al. [8] combined the Park–Ang method with ANN to evaluate seismic damage in low-rise RC buildings, introducing a simplified global damage index (GDI) that the ANN efficiently predicted with reduced error. Taeyong Kim et al. [9] trained deep neural network (DNN) models on modified Bouc-Wen-Baber-Noori (m-BWBN) data, which capture degradation and pinching effects, demonstrating superior accuracy over regression methods in predicting peak seismic responses.
ML methods have also proven effective for seismic signal characterization [14,15] and structural performance prediction using neural networks [16,17], genetic programming, tree-based models, and hybrid approaches [18,19,20]. These studies highlight ML’s versatility in structural dynamics, including vibration assessments, structural health monitoring, and predictive maintenance [21,22,23,24]. Traditional analytical methods—such as solving second-order differential equations—offer elegant solutions under ideal conditions but struggle with real-world complexity, where uncertainty in initial conditions, noisy or sparse measurements, and parameter variability degrade predictive accuracy.
This study employs PINNs because, unlike black-box predictors, they embed the governing physical laws directly into the learning process, enabling accurate, data-efficient parameter estimation even from sparse, noisy, or incomplete observations. Such capability is critical for real-time monitoring, adaptivity, and robustness—especially in analyzing Single-Degree-of-Freedom (SDOF) systems near resonance. Moreover, ML provides scalable solutions for parameter identification, adapting beyond fixed-code expressions (e.g., Eurocode 8) to evolving system dynamics. The combination of ML and optimization creates a robust framework for structural dynamics analysis, enhancing the precision of vibration assessments and response predictions.
Recent work has emphasized hybrid strategies that blend physical modeling with data-driven methods. The integration of physics-based models with intelligent algorithms has advanced complex modeling, classification, and parameter estimation [25,26,27,28,29,30,31,32]. For instance, predicting structural responses in RC structures often depends on accurately modeling bond behavior [33]. Amini Pishro et al. [25] showed that combining ANNs with multiple linear regression improves predictive accuracy compared to traditional methods. Building on these efforts, hybrid ML approaches [28] have enhanced predictions across various loading conditions by merging data-driven techniques with physical insights. Similarly, PINNs have successfully simulated bond behavior in ultra-high-performance concrete (UHPC) under monotonic loading [29], demonstrating the value of incorporating physical constraints into ML—a strategy central to this study’s PINN-based modeling.
Machine learning has also been applied to predict structural behavior under combined loads, such as failure mechanisms in RC beams strengthened with fiber-reinforced polymers (FRPs) under torsion, shear, and bending [30,31]. Additionally, hybrid ML methods have advanced system-level modeling and optimization in spatial–temporal systems, including rail transit station classification [26,27,32]. These examples highlight ML’s potential for addressing multi-parameter, nonlinear structural responses beyond the scope of traditional tools.
Traditional approaches for analyzing harmonic excitation—solving second-order differential equations—remain essential for understanding transient and steady-state responses but are limited by uncertainties and sensitivity to initial conditions [34,35]. To overcome these limitations, recent studies have analyzed nonlinear responses in SDOF systems, focusing on different structural types and seismic protection systems [36,37,38,39]. N. Asgarkhani et al. [36] trained ML-based models on nonlinear time history and incremental dynamic analyses of Buckling-Restrained Brace Frames (BRBFs) to predict inter-story drift (ID) and residual inter-story drift (RID) with up to 98.7% accuracy, outperforming conservative estimates like FEMA P-58. Davit Shahnazaryan et al. [37] developed Decision Tree and XGBoost models to improve nonlinear response prediction using next-generation intensity measures, such as average spectral acceleration (Sa_avg), and outperformed empirical methods in predicting collapse behavior across periods. Payán O. et al. [38] used deep learning to predict the seismic responses of RC buildings, focusing on maximum inter-story drifts. Their models effectively estimated ductility and hysteretic energy, although careful tuning was required to avoid overfitting. Similarly, Hoang D. Nguyen et al. [39] compared six ML methods, including ANN and random forest (RF), to predict peak lateral displacements of seismic isolation systems, with RF achieving R2 values of 0.9930 (training) and 0.9498 (testing) using 234,000 OpenSees-generated data points. Practical applications, including a GUI tool, confirmed the model’s utility at the design level.
ANN-based methods have repeatedly proven reliable for rapid predictions of SDOF system responses [40,41,42]. Other algorithms—such as RF, eXtreme Gradient Boosting, and Stochastic Gradient Boosting—have also demonstrated success in seismic modeling [43]. Gentile and Galasso [44] further extended this by employing Gaussian Process Regression for probabilistic seismic demand modeling, highlighting the versatility of ML in capturing complex dynamics. Accurate characterization of seismic signals remains essential for improving response predictions [45,46], yet many ML studies overlook key parameters that drive reliability.
These developments underscore the ongoing need to modernize seismic design codes, such as Eurocode 8 (BS EN 1998) [47,48,49], which are evolving toward performance-based and resilience-oriented frameworks. Eurocode 8 [47,48] establishes prescriptive formulations for seismic design, defining elastic response spectra with a fixed damping ratio of 5% and providing simplified expressions for structural dynamic amplification through static and modal combination procedures [47,48,49]. These provisions are efficient for conventional elastic design but rely on predefined spectral shapes and constant damping assumptions that may not accurately capture transient or nonlinear system behavior under variable excitation conditions.
In contrast, the adaptive Dynamic Magnification Factor (DMF) approach proposed in this study dynamically computes amplification from the instantaneous system response using a coupled PINN and EKF framework. This enables continuous updating of stiffness, damping, and response parameters, thereby extending the Eurocode 8 concept of spectral amplification toward a data-driven, real-time formulation suitable for modern performance-based design and structural health monitoring.
In this study, Eurocode 8 is referenced as a contextual motivation for adaptive dynamic analysis rather than as a direct performance benchmark. The present linear-harmonic SDOF formulation provides an isolated, well-understood setting for validating the proposed PINN–EKF framework prior to extending it to transient seismic loading conditions. More direct comparison with Eurocode 8 response spectrum provisions will be addressed in future work.
The proposed adaptive framework also builds on prior experimental and numerical investigations of structural interfaces. Recent studies have demonstrated how structural mechanisms can significantly influence stiffness and energy dissipation under cyclic or dynamic loading conditions [28]. Related findings highlight the potential for composite systems to achieve enhanced ductility and load-bearing capacity through improved material–interface performance [29]. These insights reinforce the motivation for developing adaptive analytical approaches capable of incorporating material-level nonlinearities and interface effects into global dynamic models.
Integrating partial differential equations with ML has already proven effective for modeling multi-scale systems beyond the limits of analytical methods [50]. Complementing such approaches, hybrid methods such as the Extended Kalman Filter (EKF) and PINNs provide promising bridges between traditional models and data-driven approaches. EKF enables real-time state estimation in noisy environments, and PINNs embed physical constraints to ensure accurate predictions even with limited data. This integration is particularly valuable for nonlinear dynamic systems, such as SDOF structures under harmonic excitation. These advances suggest that ML-driven structural dynamics analysis can effectively capture nonlinear behavior and evolving design philosophies, supporting the transition toward next-generation Eurocode 8 standards [47,48,49]. The growing role of artificial intelligence in advancing design codes represents a transformative shift toward data-driven decision-making and automated optimization, enabling continuous refinement of structural standards such as Eurocode 8 through predictive analytics and real-time calibration with experimental data. In this context, machine learning serves as a powerful complement to established design frameworks by uncovering hidden patterns, improving predictive performance, and bridging the gap between empirical code formulations and modern computational intelligence [51].
This paper proposes a hybrid framework that combines classical structural dynamics with ML and optimization to analyze SDOF systems under harmonic excitation. We address the limitations of Eurocode-style steady-state formulas for handling noisy, sparse measurements by proposing a physics-informed ML framework that estimates SDOF parameters in real time and improves displacement prediction under resonance. Analytical formulations are combined with EKF and PINNs to evaluate the effects of key parameters, such as damping ratio and frequency ratio, on the dynamic response. Numerical simulations explore parameter uncertainty and measurement noise, using EKF and PINNs for parameter identification and prediction to leverage both data assimilation and physics-guided learning.
Traditional approaches in structural dynamics often exhibit high sensitivity to initial conditions and parameter uncertainties, perform poorly with sparse or noisy data, and lack adaptivity for real-time monitoring. Simplified Eurocode 8 assumptions, while effective for elastic design, may fail to capture transient, nonlinear, or site-specific effects observed in real structures. Recent technical reports and code-development discussions emphasize the need to incorporate nonlinear response modeling, refined soil–structure interaction [52], and improved representation of site-dependent spectra. These evolving directions highlight the growing demand for analytical and computational tools capable of handling parameter uncertainty, data assimilation, and adaptive modeling under dynamic loading. The proposed hybrid framework directly addresses these challenges by integrating analytical formulations, PINNs, the EKF, and optimization algorithms to improve SDOF response prediction under harmonic and seismic excitations. This adaptive, data-driven methodology strengthens Eurocode-based design analysis and aligns with the broader modernization trends in seismic performance assessment.
While Eurocode 8 provides prescriptive formulations for seismic design based on elastic response spectra, the present study focuses on the fundamental dynamic mechanisms represented by a linear SDOF system under harmonic excitation. This simplified setting allows the proposed PINN–EKF framework to be validated under controlled conditions before extending it to transient and nonlinear seismic inputs in future work.
To demonstrate the efficiency of the proposed methodology, Section 2 introduces the governing equations of the structural dynamic system and formulates the SDOF model used throughout this study. The subsequent sections describe the implementation of the PINN and EKF algorithms, followed by validation and comparative analysis with classical solutions and Eurocode-based response predictions.
2. Structural Dynamic System
2.1. Harmonic Excitation
Many natural forcing functions can be approximately represented as a series of harmonic forces, a representation known as Fourier decomposition. A harmonic force is a simple mathematical representation of a periodic force. Figure 1 presents a spring-mass-damper system subjected to a harmonic force.
Figure 1.
SDOF dynamic system subjected to a harmonic force (sinusoidally oscillating magnitude).
By examining the equation of motion for the spring-mass-damper system, its response to an external harmonic forcing function can be analyzed. This analysis reveals how the system behaves under periodic excitation, capturing both the transient response, which fades over time due to damping, and the steady-state response, which persists with characteristics dependent on the system’s parameters and the forcing frequency. The system is subjected to a sinusoidal force of the form F(t) = P0 sin(ωt), where P0 represents the amplitude of the applied force and ω denotes the angular frequency of the forcing function.
The equation of motion for the spring-mass-damper system with harmonic forcing can be expressed as
m d²x/dt² + c dx/dt + k x = P0 sin(ωt)  (1)
where m represents the mass of the system, c the damping coefficient, and k the spring constant. Moreover, x(t) denotes the displacement of the mass as a function of time, and dx/dt and d²x/dt² are its first and second time derivatives, representing velocity and acceleration, respectively.
Equation (1) represents the dynamic response of a system subjected to an external harmonic force. Its solution generally comprises two components: the transient response, which diminishes over time due to damping and is an oscillation at the damped natural frequency ωd, and the steady-state response, which remains as long as the harmonic force is applied. The steady-state solution typically follows a sinusoidal pattern with the same frequency as the forcing function but includes a phase shift and an amplitude determined by the system’s parameters and the forcing frequency ω.
The analysis of this system is crucial in understanding phenomena such as resonance, where the amplitude of the system’s response becomes significantly large when the forcing frequency ω approaches the system’s natural frequency ωn. Damping plays a critical role in limiting the amplitude of the response, especially near resonance.
The inclusion of the harmonic forcing function introduces a dynamic driving mechanism that significantly influences the system’s behavior, leading to important insights into the interplay between external forces and the system’s inherent properties.
Equation (1) is a non-homogeneous second-order differential equation. The objective now is to solve this differential equation to derive an expression for the displacement as a function of time, x(t). Since this is a non-homogeneous second-order differential equation, its solution consists of two components: the complementary solution, which corresponds to the solution of the associated homogeneous equation, as presented in Equation (2), and the particular solution, which accounts for the effects of the non-homogeneous term.
The damping coefficient c is related to the damping ratio ξ through the expression c = 2mξωn, where ωn = √(k/m) is the undamped natural circular frequency of the system. Equation (3) represents the general solution of a non-homogeneous second-order differential equation describing a system’s response. It is composed of two parts: the complementary solution xc(t) and the particular solution xp(t). The complementary solution, xc(t), is the solution to the homogeneous equation when the external forcing term is absent and represents the system’s natural response, also known as the free vibration or transient response. This part depends on the system’s initial conditions and damping, and it gradually decays over time. The particular solution, xp(t), accounts for the effects of the external forcing function, representing the forced vibration or steady-state response, which persists as long as the external force is applied. Together, these two components fully describe the system’s displacement over time: the transient response gradually diminishes, and the steady-state response determines the long-term behavior of the system under external excitation.
Given that the right-hand side of the equation is a sinusoidal function, the trial solution is chosen to be a generalized sinusoidal form, xp(t) = A sin(ωt) + B cos(ωt), as shown in Equation (4). This assumes that the system’s response will also follow a sinusoidal pattern but with potentially different amplitude and phase. To proceed, this trial solution was differentiated, and the resulting expressions were then substituted back into the original differential equation. This substitution process is described by Equation (5), which uses the trial solution and its derivatives to determine the unknown parameters. Through this method, the specific amplitude and phase of the system’s steady-state response under the given sinusoidal forcing function were determined.
Applying Equations (4) and (5) in Equation (1) will result in
or
Expressions for the constants A and B can be derived by grouping the sine terms, as shown in Equation (8), and the cosine terms, as shown in Equation (9). This allows us to isolate the unknown constants by equating the coefficients of the sine and cosine terms on both sides of the equation.
Therefore, two equations with two unknowns are obtained, and simultaneous equations can be used to solve for A and B, as shown in Equations (10) and (11).
Applying the constants A and B in Equation (4) will result in
Considering Equation (4) and the complementary and particular solutions, the complete expression for the response of the Single Degree of Freedom (SDOF) system under harmonic excitation is obtained as Equation (13), which includes the transient and steady-state responses.
The dimensionless response equation offers an alternative representation of the complete response, as described in Equation (13). This formulation utilizes dimensionless ratios to simplify analysis and facilitate generalization across diverse systems. One key dimensionless parameter is the frequency ratio, denoted β, which is defined as the ratio of the excitation frequency ω to the system’s natural frequency ωn, expressed as β = ω/ωn (Equation (14)).
By introducing this dimensionless parameter, the response equation can be rewritten in a form that highlights the influence of excitation frequency relative to the system’s natural frequency. This approach facilitates comparisons between different systems and provides deeper insights into resonance behavior, damping effects, and the overall dynamic characteristics of the Single Degree of Freedom (SDOF) system under harmonic excitation. The steady-state response is often represented in dimensionless form. Applying the frequency ratio in Equation (12) yields
Therefore, the final response equation can be restated as presented in Equation (16).
2.2. Phase Angle
In the steady-state response given by Equation (15), the common amplitude factor can be abbreviated for simplification. Additionally, the term proportional to sin(ωt) can be designated as Component 1, while the term proportional to cos(ωt) is referred to as Component 2. This decomposition allows for a more straightforward interpretation of the system’s response by distinguishing between contributions from different dynamic effects. Therefore, the steady-state response includes two components: Component 1 is in phase with the applied harmonic force, while Component 2 is 90 degrees out of phase.
Since sin(ωt) appears in both the harmonic forcing function and Component 1, it follows that these terms share the same phase and oscillate at the same frequency. This indicates that this response component, being directly driven by the forcing term, remains synchronized with the external forcing function, reinforcing the system’s steady-state behavior.
Component 2 of the steady-state response arises due to the effect of damping, which influences the system’s motion by introducing a phase shift relative to the external forcing function. This component, proportional to cos(ωt), accounts for energy dissipation in the system and contributes to the overall response by modifying both the amplitude and phase characteristics. The cos(ωt) term is 90 degrees out of phase with sin(ωt), meaning that Components 1 and 2 are orthogonal to each other. As a result, these components can be represented as two separate vectors in a phase plane, with a 90° phase difference. This vector representation, as shown in Figure 2, facilitates a clearer understanding of how damping affects the system’s steady-state response by introducing a component that is phase-shifted relative to the external forcing function.
Figure 2.
Rotating Vectors Representing Steady-State Response.
According to Figure 2, the magnitude of the resultant vector R is the amplitude of the steady-state response. The red vector R represents the actual steady-state response and is derived as the vector sum of the two black component vectors. Its instantaneous value at any given moment is determined by projecting it onto the real axis.
Since the applied force is in phase with the leading response vector (Component 1), the phase angle φ indicates how much the system’s response (red vector) lags the applied harmonic force.
As the damping level increases, the trailing vector (Component 2) becomes longer. As a result, the overall response vector lags further behind the applied force, increasing the phase shift.
The amplitude of the steady-state response is the magnitude of the total response vector R, as expressed in Equation (17).
Equation (17) can be restated by expanding the component terms:
The phase lag, denoted as φ, represents the angle between the applied force and the system’s response, as presented in Equation (19), tan φ = 2ξβ/(1 − β²). In other words, φ quantifies the phase shift between the external forcing function and the resulting motion of the system. This phase difference arises from damping and system dynamics, which affect how the response lags behind the applied force. A higher damping ratio increases the phase lag, meaning the system takes longer to reach its peak displacement relative to the driving force. Therefore, the steady-state response lags the applied harmonic force by the phase angle φ.
The black line in Figure 3 illustrates the waveform representing the frequency and phase of the applied force, P0 sin(ωt), and the red curve presents the corresponding waveform showing the phase lag φ between the force and the system’s response, x(t). By understanding the nature of the applied force or the specific harmonic force acting on the system, the steady-state response can be fully characterized, including the frequency and phase relationships between the applied force and the resulting motion. This allows for a comprehensive understanding of how the system behaves under harmonic excitation.
Figure 3.
Phase lag between the applied harmonic force and the steady-state response.
2.3. Dynamic Magnification Factor (DMF) and Resonance
The Dynamic Magnification Factor (DMF) is a crucial parameter in structural dynamics, defined as the ratio between the amplitude of the steady-state dynamic response and the deflection caused by a static load of the same magnitude. This factor serves as a powerful tool for assessing the extent to which dynamic loading amplifies a structure’s response compared to its static counterpart. It provides engineers with insights into the influence of oscillatory forces, helping them predict potential resonance effects and ensure the structural integrity and serviceability of engineering systems. Mathematically, it is expressed as
Considering Equations (18) and (20), the DMF can be stated as
DMF = 1 / √[(1 − β²)² + (2ξβ)²]  (21)
The DMF and the phase angle φ are key parameters that define the behavior of a structure or mechanical system under harmonic excitation. Both depend on the forcing frequency ω, the system’s natural frequency ωn, and the level of damping ξ. The DMF quantifies the amplification of displacement caused by dynamic effects, indicating how much a harmonic force magnifies the structural response beyond its static deflection. The phase angle φ represents the lag between the applied force and the system’s response. Together, these parameters fully characterize the steady-state response when the harmonic excitation’s frequency and amplitude are known.
Resonance occurs when a system is subjected to a periodic force whose frequency matches its natural frequency, leading to a significant increase in vibration amplitude. This occurs because the external force continuously supplies energy in sync with the system’s natural oscillations, leading to a buildup of motion that can cause structural damage or failure if not adequately controlled.
Figure 4 depicts the relationship between the frequency ratio (β) and the dynamic magnification factor (DMF) for various damping levels, ranging from 3% to 50%. The x-axis represents the frequency ratio β (dimensionless), while the y-axis shows the DMF (dimensionless).
Figure 4.
Variation in the Dynamic Magnification Factor (DMF) with the frequency ratio β (ξ = 0.03–0.50, with increment Δβ).
The graph demonstrates the typical behavior of a damped system under harmonic excitation. As the frequency ratio approaches 1 (resonance), the DMF increases sharply, reaching its maximum value. This peak indicates the resonance condition, where the system experiences maximum oscillation amplitude relative to the applied force.
For the lowest damping value of 3% (blue curve), the peak is significantly higher, exceeding 16, and the curve exhibits a pronounced sharpness around β = 1. This sharp peak reflects the high amplification of oscillations at resonance when damping is low. As damping increases, the peak value decreases and the curve flattens: for 5% damping (orange curve) the peak drops to about 10, and for 10% damping (green curve) it is noticeably lower, reaching approximately 5, with the response becoming less sensitive to frequency variations around resonance. With 20% damping (red curve) and 50% damping (purple curve), the peak continues to diminish, and the DMF approaches a more moderate level across the frequency range, illustrating a significant reduction in amplitude amplification as damping increases.
This trend highlights the role of damping in mitigating the system’s resonant response. High damping reduces the amplitude of oscillations at resonance and broadens the range of frequencies over which the system experiences lower amplification. Therefore, systems with higher damping coefficients are less likely to experience excessive oscillations or damage due to resonance, making damping an essential design factor for structures subjected to dynamic loads.
The DMF helps quantify this effect by indicating how much the system’s response is amplified under dynamic loading. As the excitation frequency approaches the natural frequency, the DMF reaches its maximum, highlighting the risk of excessive displacement. Additionally, the phase angle shifts noticeably, moving from an in-phase response at low frequencies to an out-of-phase response at higher frequencies.
Understanding the relationship between the DMF, phase angle, and system parameters provides critical insights into resonance, energy dissipation, and overall vibratory behavior. As the forcing frequency approaches the natural frequency, the DMF reaches its peak, leading to maximum displacement, while the phase angle shifts from in-phase behavior at low frequencies to out-of-phase behavior at higher frequencies. By quantifying dynamic amplification, the DMF enables engineers to assess the effects of dynamic loading and anticipate potential resonance issues. To mitigate excessive vibrations and minimize resonance risks, structural design strategies can incorporate damping mechanisms, modify system stiffness, or adjust excitation conditions to achieve optimal performance.
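To make the dependence of the amplification and phase lag on β and ξ concrete, the following minimal Python sketch evaluates the DMF of Equation (21) and the phase lag of Equation (19) for the damping ratios shown in Figure 4; the function names and the β sweep are illustrative choices, not part of the original study.

import numpy as np

def dmf(beta, xi):
    # Dynamic magnification factor of a damped SDOF system (Equation (21))
    return 1.0 / np.sqrt((1.0 - beta**2) ** 2 + (2.0 * xi * beta) ** 2)

def phase_angle(beta, xi):
    # Phase lag (rad) between force and steady-state response, in [0, pi] (Equation (19))
    return np.arctan2(2.0 * xi * beta, 1.0 - beta**2)

beta = np.arange(0.0, 3.0, 0.01)                  # frequency ratio sweep
for xi in (0.03, 0.05, 0.10, 0.20, 0.50):         # damping ratios of Figure 4
    d = dmf(beta, xi)
    print(f"xi = {xi:.2f}: peak DMF = {d.max():6.2f} near beta = {beta[d.argmax()]:.2f}")

Running this sweep confirms that the peak amplification occurs just below β = 1 and scales roughly as 1/(2ξ), consistent with the damping trend described above.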
While Eurocode 8 defines seismic design spectra for transient ground motions, the present study adopts a linear single-degree-of-freedom (SDOF) system under harmonic excitation as a conceptual and computational benchmark. This simplification enables systematic validation of the proposed hybrid PINN–EKF framework under well-controlled dynamic conditions, where the analytical solution is known and the effects of parameter uncertainty and noise can be rigorously assessed. The insights gained from this harmonic analysis provide a foundational step toward extending the method to nonlinear and multi-degree-of-freedom (MDOF) systems driven by recorded or spectrum-compatible seismic excitations, thereby supporting future code calibration and performance-based design developments.
3. Conventional Numerical Analysis of Structural Dynamics
In this section, structural dynamic principles are applied to analyze the response of a lightweight steel frame supporting heavy machinery with a total mass of m = 15,000 kg, assuming the frame’s self-weight is negligible. An experimental impact test estimates the inherent structural damping as ξ = 0.03. The structure is laterally constrained, preventing twisting and significant vertical movement, allowing it to be modeled as a single-degree-of-freedom (SDOF) system under harmonic excitation. A load test reveals that a lateral force of P = 2000 N results in a lateral displacement of Δ = 7 mm. Given a harmonic force of magnitude P0 = 700 N and a forcing frequency of f = 0.9 Hz, the dynamic magnification factor and the phase shift between the applied force and steady-state response were determined. Additionally, the structural response over t ∈ [0, 60] s was analyzed, and if the harmonic force is removed at t = 10 s, the mass position at t = 45 s was determined.
In the next section, machine learning and optimization algorithms are introduced to predict and refine the system’s hyperparameters, and their performance is compared with the structural dynamic approach to evaluate their effectiveness in analyzing and optimizing the structural response.
3.1. DMF and Phase Shift
Figure 5 is presented by applying Equation (21) for the dynamic magnification factor (DMF) and Equation (19) for the phase shift between the applied force and the system’s response.
Figure 5.
Phase Angle Response of the Damped SDOF System under Harmonic Excitation (system parameters as defined in Section 3, with Δβ = 0.01).
Figure 5 illustrates the Response Phase Angle as a function of the frequency ratio (β) in a harmonically excited SDOF system, where the phase angle represents the lag between the applied force and the system’s response. It illustrates the expected phase behavior of a damped SDOF system under harmonic excitation, with a key characteristic being the transition from in-phase to out-of-phase response. Beyond the resonance region, the system’s response significantly lags the excitation.
At low-frequency ratios (β < 1.0), the phase angle remains close to 0°, indicating that the system’s response is nearly in phase with the applied force. As the frequency ratio approaches β = 1.0, a rapid transition occurs, marking the onset of the resonance region. In this range, the phase angle shifts dramatically from near 0° to 180°, indicating that the system transitions from being in phase with the applied force to almost entirely out of phase.
Beyond the resonance region (β > 1.0), the phase angle stabilizes at approximately 180°, indicating that the response consistently lags the applied force by half a cycle. The critical point, marked at β ≈ 1.296, highlights a phase difference of approximately 173.5° (3.028 radians), corresponding to a dynamic magnification factor (DMF) of 1.463 and a steady-state dynamic amplitude of 0.0036 m. The static deflection, obtained by dividing the force magnitude by the stiffness, is 0.0025 m. Table 1 summarizes the results presented above for the damped SDOF system.
Table 1.
Phase Angle and Dynamic Response of the Damped SDOF System.
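The quantities reported in Table 1 can be reproduced with a short numerical sketch based on the parameters stated at the start of Section 3 (m = 15,000 kg, ξ = 0.03, P = 2000 N, Δ = 7 mm, P0 = 700 N, f = 0.9 Hz); the script below is an illustration, and all variable names are ours.

import numpy as np

m, xi = 15_000.0, 0.03              # mass (kg) and damping ratio from the impact test
k = 2000.0 / 0.007                  # stiffness from the static load test, P / Delta (N/m)
P0, f = 700.0, 0.9                  # harmonic force amplitude (N) and forcing frequency (Hz)

wn = np.sqrt(k / m)                 # undamped natural circular frequency (rad/s)
w = 2.0 * np.pi * f                 # forcing circular frequency (rad/s)
beta = w / wn                       # frequency ratio

dmf = 1.0 / np.sqrt((1.0 - beta**2) ** 2 + (2.0 * xi * beta) ** 2)
phi = np.arctan2(2.0 * xi * beta, 1.0 - beta**2)   # phase lag (rad)

x_static = P0 / k                   # static deflection (m)
x_steady = x_static * dmf           # steady-state amplitude (m)
print(beta, dmf, np.degrees(phi), x_static, x_steady)
# approx. 1.296, 1.46, 173.5 deg, 0.0025 m, 0.0036 m (cf. Table 1)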
3.2. Structural Response
Equation (16) was used to calculate the system’s response to the harmonic loading. Therefore, the two constants within the transient component must first be determined. This is achieved by applying the initial conditions: at t = 0, both the position and velocity are zero. To apply the second boundary condition on velocity, the expression for x(t) must first be differentiated. For simplicity, some substitutions will be made to make the equation more manageable.
Applying Equations (22) and (23), Equation (16) was restated as follows:
Considering the following assumptions
while
Each of the four functions of t can be differentiated individually, applying the product rule to two of them and the chain rule to the remaining two. Therefore
Therefore, the velocity can be formulated as presented in Equation (31).
Considering the expressions for displacement and velocity and applying the zero initial conditions, the unknown transient constants are obtained as
As shown in Figure 6, the transient response gradually diminishes over time due to damping, allowing the system’s behavior to be effectively characterized by the steady-state response. Once the transient effects dissipate, the system stabilizes into periodic oscillations dictated by its natural frequency and external forcing, making the steady-state response the primary focus for analysis and practical applications.
Figure 6.
Components of Displacement-Time History: Transient and Steady-State Responses.
The displacement-time history in the figure captures both transient and steady-state components. Initially, the transient response exhibits oscillations with an amplitude of approximately ±0.004 m, which progressively decay as energy dissipates. By t ≈ 20 s, the transient effects become negligible, and the system transitions into steady-state behavior.
In this steady-state phase, the displacement oscillates with a nearly constant amplitude of about ±0.0025 m, following a sinusoidal pattern. The static displacement, representing the equilibrium position, is also plotted, along with the steady-state amplitude, which marks the stabilization of oscillations.
This response confirms that after an initial disturbance, the system undergoes damped oscillatory motion before settling into a stable vibration pattern. The well-defined steady-state oscillations and the decay of the transient component align with theoretical expectations, illustrating the balance between external forcing and damping.
Figure 7 illustrates the combined displacement-time history of a dynamic system, capturing both the transient and steady-state responses. Initially, the displacement exhibits high-amplitude oscillations, reaching approximately ±0.0065 m, which gradually diminish over time due to damping. This transient phase, evident in the first 20 s, is characterized by a reduction in oscillation magnitude as energy dissipates from the system.
Figure 7.
Displacement-Time History: Transient and Steady-State Response.
Beyond t ≈ 20 s, the transient response becomes negligible, and the system settles into a periodic steady-state oscillation. In this phase, the displacement stabilizes within a consistent amplitude range of approximately ±0.0025 m. The oscillatory pattern in the steady-state response aligns with the system’s natural frequency and the influence of external excitation forces.
The observed behavior aligns with theoretical expectations: an initial disturbance triggers a damped transient response, followed by a steady-state vibration governed by the balance between external forcing and damping. The results confirm that after an initial phase of energy dissipation, the system exhibits predictable harmonic motion with a stable amplitude, making the steady-state response the primary focus for long-term analysis.
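The displacement histories of Figures 6 and 7 can be reproduced by assembling the closed-form solution numerically. The sketch below continues the Section 3.1 script (reusing m, xi, wn, w, beta, dmf, phi, and x_static); the constant names C1 and C2 and the 20 s window are our assumptions, based on the standard transient-plus-steady-state form.

wd = wn * np.sqrt(1.0 - xi**2)      # damped natural circular frequency (rad/s)
X = x_static * dmf                  # steady-state amplitude (m)

# Transient constants from the zero initial conditions x(0) = 0 and v(0) = 0
# (C1, C2 are our names for the two constants of the homogeneous solution)
C1 = X * np.sin(phi)
C2 = (xi * wn * C1 - X * w * np.cos(phi)) / wd

t = np.arange(0.0, 20.0, 0.01)
transient = np.exp(-xi * wn * t) * (C1 * np.cos(wd * t) + C2 * np.sin(wd * t))
steady = X * np.sin(w * t - phi)
x_total = transient + steady        # total displacement history (m), cf. Figures 6 and 7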
3.3. Free Vibration Response
The harmonic excitation is removed at t = 10 s, leading to a change in the system’s dynamic response. Initially, the system exhibits forced vibration behavior due to the applied harmonic force, characterized by oscillations influenced by both transient and steady-state responses. Once the external force is removed, the system transitions into free vibration, where its natural frequency and damping characteristics govern its motion.
To determine the position of the mass at t = 45 s, the displacement function should be analyzed with respect to the governing differential equation of motion. Before the force is removed, the displacement follows a combination of transient and steady-state oscillations. After t = 10 s, the system undergoes damped free vibration, during which the displacement gradually diminishes due to damping. By evaluating the displacement function at t = 45 s, the precise position of the mass can be determined, reflecting the long-term behavior of the system in the absence of external excitation. The numerical result for the displacement at this instant would depend on the system parameters, including mass, damping coefficient, and natural frequency.
The displacement and velocity values obtained in the previous step now serve as the initial conditions for determining the free vibration response at t = 45 s. At this point, our focus shifts solely to the system’s transient response. It is no longer a concern with steady-state conditions but instead with how the system behaves over time, starting from the new initial conditions.
To proceed, it is necessary to determine the two constants that characterize the system’s transient response. These constants depend on the initial displacement and velocity, which have been established in the previous step. As a result, the general expressions for displacement and velocity were restated, incorporating the new initial conditions, to solve for these constants. By doing so, we will be able to fully characterize the system’s motion at any given time, particularly during the transient phase, and gain insights into its dynamic behavior at t = 45 s. By applying the displacement and velocity calculated at t = 10 s as boundary conditions to both expressions, we obtain two simultaneous equations.
Expressing the results in matrix form yields
Solving Equation (37), we obtain the two free-vibration constants. Evaluating the free-vibration equations at t = 45 s, the displacement response is approximately −0.00003 m, as presented by the red line in Figure 8.
Figure 8.
Transient Displacement Response of the System (10–45 s).
Figure 8 shows the displacement-time history of the system from 10 to 45 s, illustrating its transient response. The displacement is plotted on the vertical axis in meters, while time is represented on the horizontal axis in seconds. The system undergoes oscillations with progressively decreasing amplitude, indicating the damping effect over time. Initially, there is a significant peak around 10 s, where the displacement reaches approximately 0.003 m. The displacement then decreases with each subsequent peak and trough, showing an oscillatory pattern. By the end of the graph at 45 s, the oscillations have nearly damped out, and the displacement approaches zero, indicating that the system has stabilized. According to the analysis above, the displacement response at t = 45 s is −0.00003 m. This final displacement value is indicated by the red line in the graph, which shows the system’s near-zero displacement at this time, confirming that the system’s oscillations have significantly damped and that the transient phase has ended.
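Continuing the response sketch above, the force-removal scenario can be checked numerically as follows; the helper function and the constant names A_free and B_free are ours, and the printed value is consistent with the −0.00003 m displacement reported for Figure 8.

def forced(t):
    # Displacement and velocity of the forced response (transient + steady state),
    # reusing wn, wd, w, xi, X, phi, C1, C2 from the previous sketch
    e = np.exp(-xi * wn * t)
    x = e * (C1 * np.cos(wd * t) + C2 * np.sin(wd * t)) + X * np.sin(w * t - phi)
    v = (e * (-xi * wn * (C1 * np.cos(wd * t) + C2 * np.sin(wd * t))
              + wd * (-C1 * np.sin(wd * t) + C2 * np.cos(wd * t)))
         + X * w * np.cos(w * t - phi))
    return x, v

x10, v10 = forced(10.0)                    # state at the instant the force is removed
A_free = x10                               # free-vibration constants from x(10) and v(10)
B_free = (v10 + xi * wn * x10) / wd

tau = 45.0 - 10.0                          # elapsed time in damped free vibration
x45 = np.exp(-xi * wn * tau) * (A_free * np.cos(wd * tau) + B_free * np.sin(wd * tau))
print(x45)                                 # approximately -3e-5 m, cf. the red line in Figure 8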
4. Data Assimilation Model for Fitting Displacement Curves
In the preceding analyses, the numerical solution of the single-degree-of-freedom (SDOF) harmonically excited system was obtained under the ideal assumption that all model parameters were known exactly. In real-world engineering, however, many of these parameters—such as mass, damping coefficient, stiffness, excitation amplitude, and frequency—must be estimated through measurements, which are inherently prone to noise and uncertainty. These discrepancies between measured and actual values result in deviations in the predicted displacement-time response from the ideal numerical solution. The closer the estimated parameters are to their exact values, the more accurately the system’s dynamic behavior can be reproduced, making parameter estimation a critical task in engineering applications.
Despite the existence of an analytical solution to the linear second-order vibration equation m d²x/dt² + c dx/dt + k x = P0 sin(ωt), its direct application in practice often encounters several challenges:
- Parameter Uncertainty: Real-world parameters vary due to manufacturing tolerances, aging, and environmental effects, which reduces the reliability of purely analytical predictions.
- Sensitivity to Initial Conditions: Analytical solutions are highly dependent on accurate initial conditions, which are challenging to measure precisely in practice.
- Measurement Noise and Sparse Sampling: Observational data are typically noisy and collected at discrete intervals, limiting the resolution and reliability of direct comparisons with model predictions.
- Model Idealization Bias: Simplified models often overlook complex behaviors, such as nonlinear damping and stochastic external forces, thereby failing to capture the full dynamics of real structures.
To overcome these limitations, data assimilation techniques, such as the Extended Kalman Filter (EKF), are employed. These methods integrate numerical models with observational data, correcting predictions by accounting for parameter uncertainties and measurement errors. EKF enhances system identification by providing real-time updates, quantifying estimation uncertainty, and improving the accuracy and robustness of vibration monitoring and control systems. Even when analytical solutions are available, data assimilation remains a vital tool due to its adaptability and reliability in complex, uncertain environments.
4.1. The EKF Method
The Extended Kalman Filter (EKF) is an extension of the classical Kalman Filter (KF), specifically designed to address the state estimation problem in non-linear systems. While the standard Kalman Filter is limited to linear Gaussian systems, real-world engineering systems often exhibit nonlinear behaviors, rendering the direct application of the classical KF invalid.
The core idea of the EKF is to preserve the recursive structure of the Kalman Filter by locally linearizing the nonlinear system. This is achieved by performing a first-order Taylor expansion of the non-linear state and observation functions around the current state estimate, thereby neglecting higher-order terms. The partial derivatives of these non-linear functions with respect to the state variables form the Jacobian matrices, which serve as linear approximations of the system dynamics and observation models.
In the EKF framework, both the state transition and observation equations are linearized independently. The estimation process consists of two sequential steps. In the prediction step, the algorithm uses the system model and the previous state estimate to forecast the current state and its associated covariance. This predicted state is then refined in the update step, where actual observational data are incorporated to correct the prediction, yielding an improved, more accurate state estimate.
Through this approach, the EKF effectively extends the applicability of Kalman filtering to a broad class of non-linear dynamic systems, enabling more accurate and robust estimation under realistic conditions. The general non-linear system governed by the following state and observation equations is considered:
x_k = f(x_k−1, u_k−1) + w_k−1,  z_k = h(x_k) + v_k
where
x_k is the state vector at moment k;
u_k is the control input at moment k;
z_k denotes the observation vector at moment k;
f(·) stands for the non-linear state transition function;
h(·) is the non-linear observation function;
w_k is the process noise with covariance matrix Q_k;
v_k is the observation noise with covariance matrix R_k.
The core idea of the EKF is to linearize the nonlinear system using a Taylor expansion and then apply the prediction and update steps of Kalman filtering. The basic process of extended Kalman filtering can be summarized as follows:
Initialize the state and covariance:
where x_0 is the initial state estimate and P_0 is the initial state covariance matrix. At each time step, the prediction step (state prediction and covariance prediction) is performed first, as presented in Equations (40) and (41).
where
x_k|k−1 is the a priori state estimate at moment k;
P_k|k−1 is the a priori covariance matrix;
F_k is the Jacobian matrix of the state transition function evaluated at the previous state estimate.
According to Equations (43)–(45), the update step (correcting the state estimates and covariances using the observed data) is then performed.
where
K_k denotes the Kalman gain;
x_k|k is the a posteriori state estimate at moment k;
P_k|k represents the a posteriori covariance matrix;
H_k is the Jacobian matrix of the observation function evaluated at the predicted state.
For this case, we treat the parameters θ = {c, P0, ω} as fixed known quantities (c, P0, ω are used directly in the system). This means that the EKF only estimates the state vector [x, v]ᵀ, where x is the displacement and v = dx/dt the velocity. The observed displacement corresponds to the observation matrix H = [1, 0]. The vector form of the continuous-time state equation is
d/dt [x, v]ᵀ = [v, (P0 sin(ωt) − c·v − k·x)/m]ᵀ
The corresponding state Jacobian is ∂f/∂x = [[0, 1], [−k/m, −c/m]].
The discretization is approximated by a forward Euler step, x_k+1 ≈ x_k + Δt·f(x_k, t_k), so that F_k ≈ I + Δt·∂f/∂x.
In this experiment, the initial state estimate is deliberately offset from the true initial conditions, the initial covariance is set to a diagonal matrix, the process noise covariance is Q = diag(1 × 10−3, 1 × 10−3), and the observation noise variance is R = 1 × 10−6.
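A minimal NumPy sketch of the EKF described above is given below; the model constants follow Section 3, while the function names, the forward Euler discretization, and the handling of missing observations are our illustrative choices rather than the exact implementation used in this study.

import numpy as np

m, xi, k = 15_000.0, 0.03, 2000.0 / 0.007
wn = np.sqrt(k / m)
c = 2.0 * m * xi * wn                             # damping coefficient, c = 2*m*xi*wn
P0, w, dt = 700.0, 2.0 * np.pi * 0.9, 0.1

def f(x, t):
    # Continuous-time state derivative for x = [displacement, velocity]
    return np.array([x[1], (P0 * np.sin(w * t) - c * x[1] - k * x[0]) / m])

A = np.array([[0.0, 1.0], [-k / m, -c / m]])      # continuous state Jacobian
F = np.eye(2) + dt * A                            # discretized transition Jacobian (Euler)
H = np.array([[1.0, 0.0]])                        # only displacement is observed
Q = np.diag([1e-3, 1e-3])                         # process noise covariance
R = np.array([[1e-6]])                            # observation noise variance

def ekf_step(x_est, P, z, t):
    # Prediction: propagate state (forward Euler) and covariance
    x_pred = x_est + dt * f(x_est, t)
    P_pred = F @ P @ F.T + Q
    if z is None:                                 # no measurement at this step
        return x_pred, P_pred
    # Update: Kalman gain and correction with the observed displacement z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + (K @ (np.atleast_1d(z) - H @ x_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new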
While the EKF extends the classical Kalman framework to nonlinear systems, its performance relies on several assumptions that may limit its applicability in complex structural dynamics problems. The EKF linearizes the nonlinear state-transition and observation models using a first-order Taylor expansion, which can introduce approximation errors when the system exhibits strong nonlinearities, such as yielding, hysteretic damping, or stiffness degradation. Moreover, EKF assumes that both process and measurement noise follow zero-mean Gaussian distributions with known covariances. In practice, seismic or experimental data often include non-Gaussian noise, bias, or outliers, which can degrade estimation accuracy or lead to divergence. Under such conditions, alternatives such as the Unscented Kalman Filter (UKF), Ensemble Kalman Filter (EnKF), or Particle Filter (PF) may offer improved robustness. In this study, the EKF is primarily applied to moderately nonlinear, single-degree-of-freedom systems where linearization remains valid. To further mitigate these limitations, the complementary use of the Physics-Informed Neural Network (PINN) enhances stability and accuracy under noisy or partially nonlinear conditions, providing a balanced hybrid estimation framework.
4.2. Numerical Experiment
The numerical experiments are based on the hypotheses developed in Section 3, as summarized in Table 2.
Table 2.
Model Parameters and Assumptions for Numerical Experiments.
A time segment is divided into 0.1 s intervals over the first 20 s, yielding 200 time steps. The true values of the solution at each time point are computed. Gaussian noise with a standard deviation of σ = 1 × 10−3 m, following the distribution N(0, σ²), is superimposed on the true values. To simulate sparse and noisy observations, data are sampled every 3 time steps, i.e., one observation every 0.3 s. The displacement plot shown in Figure 9, fitted by the EKF, includes confidence intervals. The analysis was performed 30 times, and the averaged results were used to calculate the confidence intervals.
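Continuing the EKF sketch above, the lines below illustrate how such a synthetic experiment can be assembled: a reference trajectory is integrated with fine sub-steps, noisy displacement samples are drawn every third step, and the filter is run from a deliberately biased initial guess. The random seed, the bias value, and the sub-step count are illustrative assumptions.

rng = np.random.default_rng(0)
t_grid = np.arange(0.0, 20.0, dt)                 # 200 steps of 0.1 s

# Reference ("true") trajectory by fine sub-stepped integration of f(x, t)
x_true = np.zeros((len(t_grid), 2))
x, sub = np.zeros(2), 20
for i, t in enumerate(t_grid):
    x_true[i] = x
    for j in range(sub):
        x = x + (dt / sub) * f(x, t + j * dt / sub)

# Noisy displacement samples every 3rd step (sigma = 1e-3 m)
obs = {i: x_true[i, 0] + rng.normal(0.0, 1e-3) for i in range(0, len(t_grid), 3)}

x_est, P = np.array([0.01, 0.0]), np.eye(2) * 1e-2   # biased initial guess, diagonal covariance
estimates = [x_est.copy()]
for i in range(1, len(t_grid)):
    x_est, P = ekf_step(x_est, P, obs.get(i), t_grid[i - 1])   # predict to t[i], correct with z_i
    estimates.append(x_est.copy())
errors = np.array(estimates) - x_true                # displacement and velocity estimation errors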
Figure 9.
(a) EKF-Based Estimation of Displacement and Velocity: Comparison with True Values and Noisy Observations. (b) The same estimation with 95% confidence intervals (Q = diag(1 × 10−3, 1 × 10−3), R = 1 × 10−6).
An Extended Kalman Filter (EKF) is employed to estimate the system states from noisy, sparse observations. For comparative analysis, the results are illustrated using graphical plots.
Figure 9 illustrates the performance of the EKF in estimating the dynamic response of the SDOF oscillator under noisy measurement conditions. Figure 9a presents the baseline estimation of displacement and velocity compared with the true response and discrete observation points, while Figure 9b shows the same analysis with added 95% confidence intervals (CIs) derived from the EKF covariance update.
In Figure 9a, the displacement plot (top) indicates that the EKF rapidly converges to the true response within the first few oscillation cycles, accurately reproducing both the amplitude and phase of the true displacement signal. The deviation between the estimated and true displacements remains below 3 × 10−4 m after convergence, even when the measurement noise amplitude is set to 10−3 m. Similarly, the velocity estimation (bottom plot) tracks the true signal with minimal lag and a maximum absolute deviation below 1.5 × 10−3 m/s. The alignment of the red-dashed line (EKF mean estimate) with the blue-solid line (true response) demonstrates the filter’s stability and consistency over the 20 s time window, despite sparse and noisy observations.
Figure 9b further incorporates the uncertainty bounds of the EKF estimates. The shaded regions represent the ±1.96σ (95%) confidence intervals computed from the estimated state covariance matrix, defined by process noise covariance Q = diag(1 × 10−3, 1 × 10−3) and measurement noise covariance R = 1 × 10−6. The narrow width of the confidence bands—approximately ±6 × 10−4 m for displacement and ±4 × 10−3 m/s for velocity—confirms the high reliability of the filter. The true responses remain consistently within these uncertainty bounds throughout the entire simulation, validating the EKF’s ability to provide both accurate and statistically consistent estimates.
Therefore, Figure 9 demonstrates that the EKF effectively reconstructs displacement and velocity histories from noisy and limited measurements, achieving high accuracy and well-bounded uncertainty. These results underscore its suitability for real-time dynamic state estimation in vibration-sensitive or monitoring-based applications, providing a reliable foundation for integration into the hybrid PINN–EKF framework.
In the displacement plot, the EKF estimate (red dashed line) aligns closely with the true displacement (solid blue line) throughout the 20 s window, demonstrating high estimation accuracy. Sparse, noisy observations (black crosses) sampled every 0.3 s are effectively assimilated by the filter, enabling it to reconstruct the displacement signal with minimal error after the initial transient phase.
During the early stage (t < 0.5 s), a noticeable deviation is observed due to the mismatch between the initial-state guess and the true value, compounded by limited observations. The maximum displacement error in this phase reaches approximately 0.01 m. However, the error decays rapidly within the first few observations, indicating the filter’s rapid convergence.
The velocity plot shows a similar trend. While initial deviations exist—primarily because velocity is not directly observed but inferred from system dynamics—the EKF estimate (magenta dashed line) quickly converges to the actual velocity (solid green line). Post-convergence, the estimates capture both amplitude and phase with high fidelity, though minor deviations persist, reflecting the effects of linearization and observation sparsity.
In this work, the mean squared error (MSE) is computed between the analytical steady-state displacement response and the predicted harmonic displacement response over the evaluated time window. The MSE is therefore defined in terms of time-history amplitude differences rather than spectral ordinates.
Table 3 presents quantitative performance metrics for evaluating the Extended Kalman Filter (EKF)’s accuracy in estimating both displacement and velocity. These metrics include the Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and the Maximum Absolute Error. The low error values confirm that the EKF provides reliable and accurate state estimation for both displacement and velocity, even in the presence of noise and sparse measurements.
Table 3.
Performance Metrics for EKF-Based Displacement and Velocity Estimation.
For displacement, the EKF demonstrates high estimation accuracy, with a low MAE of 7.50 × 10−4 m and an RMSE of 1.47 × 10−3 m. The small MSE value (2.16 × 10−6) indicates that the point-wise errors are consistently minimal. The maximum absolute displacement error is 1.08 × 10−2 m, which, although larger than the average errors, occurs during the initial transient period and reflects the impact of initialization uncertainty.
Velocity estimation errors are slightly higher, with an MAE of 3.70 × 10−3 m/s and an RMSE of 5.80 × 10−3 m/s. This increase is expected, as velocity is not directly observed but instead inferred through dynamic system equations. The MSE for velocity is 3.36 × 10−5, and the maximum absolute error reaches 3.02 × 10−2 m/s, likely due to the combined effects of model linearization and sparse observation intervals.
Figure 10 shows that the EKF rapidly corrects the initial discrepancies and provides reliable state estimates. The shaded areas highlight the error bounds, reinforcing the filter's stability and robustness across the entire time span.
Figure 10.
Time Evolution of Estimation Errors in EKF-Based Displacement and Velocity Tracking.
The error profile varies sharply at the beginning and stabilizes after approximately 1 s, with small oscillations around zero. Initially, the velocity error is significant and predominantly negative (an underestimation of the initial velocity), but it gradually returns to near zero owing to the corrective effect of the observations. After 2 s, the velocity error fluctuates around zero at a rate consistent with the frequency of the external excitation, indicating some lag and residual error in the filter's response to high-frequency dynamics.
The displacement error exhibits slight fluctuations, with its mean value close to zero, indicating that the filter is unbiased and robust in estimating displacement. The velocity error exhibits small fluctuations without systematic drift or divergence, reflecting a reasonable trade-off between process noise and observation noise in the EKF configuration.
The figure shows the time evolution of estimation errors for displacement (top) and velocity (bottom) from the Extended Kalman Filter (EKF) over a 20 s window. In the displacement plot, an initial sharp peak (around t = 0) with a maximum error of approximately 0.01 m decays within the first second, indicating fast convergence. The error then remains small and oscillates around zero, demonstrating stable tracking throughout the simulation.
The velocity error plot follows a similar trend, with a larger initial deviation of about −0.03 m/s due to the indirect velocity measurement. As more displacement observations are incorporated, the velocity error converges toward zero, with residual fluctuations persisting between 5 s and 15 s. Despite these fluctuations, the velocity error remains low, confirming the EKF's effectiveness with noisy and sparse data.
The present analysis is restricted to a linear SDOF oscillator subjected to harmonic excitation. This linear formulation serves as a controlled baseline for validating the PINN–EKF framework, enabling direct comparison with analytical solutions and facilitating evaluation of the algorithm’s numerical stability and convergence. While this assumption excludes nonlinearities such as material yielding, hysteresis, and geometric coupling, it provides a fundamental step toward developing and verifying adaptive dynamic models. Future extensions will address nonlinear SDOF and MDOF systems to further align the methodology with realistic seismic behavior and code-based dynamic analysis.
5. PINN Model for Fitting Displacement Curves
A novel parameter identification approach based on Physics-Informed Neural Networks (PINNs) is proposed to address the dynamic parameter estimation problem in single-degree-of-freedom (SDOF) systems subjected to harmonic excitation. By integrating the governing differential equations of motion directly into the neural network’s loss function, the method enables simultaneous, high-accuracy estimation of key system parameters, including the damping coefficient, excitation amplitude, and excitation frequency. This physics-guided learning framework enhances interpretability and reduces reliance on large datasets. Furthermore, experimental results demonstrate that the method is highly robust to observational noise, maintaining accuracy even under non-ideal measurement conditions. These findings suggest that the proposed PINN-based approach offers a promising alternative to traditional parameter identification methods for dynamical mechanical systems, particularly in scenarios where data are sparse or noisy.
Consider the equation of motion of an SDOF linear vibration system subjected to harmonic excitation, m·ü(t) + c·u̇(t) + k·u(t) = F(t), where m and k represent the known mass and stiffness of the system, respectively, and F(t) is a harmonic forcing of amplitude F0 and circular frequency ω. The parameters to be estimated are the damping coefficient c, the excitation amplitude F0, and the excitation frequency ω. The system is assumed to start from rest, with initial conditions u(0) = 0 and u̇(0) = 0.
Over the time interval [0, T], the system response is sampled at discrete time points t_i, yielding noisy displacement observations ũ_i. Because the measurements are subject to noise, each observed displacement ũ_i deviates from the true displacement u(t_i). These data serve as the basis for a data-driven inverse problem to estimate the unknown dynamic parameters. A parameter estimation model is constructed to recover the values of c, F0, and ω despite the presence of noise. This inverse-problem framework enables robust identification of dynamic excitation characteristics and damping behavior under noisy or underdetermined conditions.
The training of the hybrid PINN–EKF model involves a two-stage optimization process. In the first phase, the Adam optimizer is used with a learning rate of 0.001 to accelerate convergence during the initial training iterations (up to 10,000 steps). This stage ensures rapid adjustment of the network parameters toward the region of minimum error. In the second phase, the Limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm is employed for fine-tuning, leveraging its quasi-Newton properties to achieve high numerical precision in the final solution. All input features were normalized to the [0, 1] range before training to enhance numerical stability. Hyperparameters, including the learning rate, batch size (32–128), and number of hidden-layer neurons (20–60), were tuned using a limited grid search with fivefold cross-validation to avoid overfitting. The optimization process significantly improved predictive accuracy, reducing the mean squared error of displacement estimation by approximately 40% compared to the non-optimized baseline configuration.
5.1. Data Preparation
Based on the preceding analysis, an analytical solution to the model is now available. Assuming zero initial conditions u(0) = 0 and u̇(0) = 0, and using the parameter values specified in Section 3, the true parameter values for a lightweight steel frame are c = 3927.922 Ns/m, F0 = 700.000 N, and ω = 5.655 rad/s.
Under these parameters, the exact solution of the model can be evaluated at any time t. Although the total simulation time is 60 s, only the first 10 s are uniformly sampled, at 1000 time instants t_i, to generate a noise-free dataset {(t_i, u(t_i))}.
In practical scenarios, displacement measurements are inevitably affected by errors, including sensor noise and environmental disturbances. To replicate such conditions, synthetic noise is added to the exact displacement values, and the noisy observations are constructed as ũ_i = u(t_i) + ε_i, with ε_i ~ N(0, (δ·σ_u)²).
Here, σ_u represents the standard deviation of the true displacement sequence u(t_i), and δ defines the noise level; in the baseline case δ = 1%. This Gaussian error simulates realistic measurement disturbances, ensuring that the parameter-estimation process is evaluated under conditions representative of real-world engineering applications.
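As a concrete illustration of this noise model, the following sketch adds zero-mean Gaussian noise whose standard deviation is δ times the standard deviation of the exact response; the sine stand-in for the analytical solution is purely illustrative.

```python
import numpy as np

def add_measurement_noise(u_exact, delta=0.01, seed=0):
    """Add zero-mean Gaussian noise whose standard deviation is delta times
    the standard deviation of the exact displacement sequence."""
    rng = np.random.default_rng(seed)
    sigma_u = np.std(u_exact)
    return u_exact + rng.normal(0.0, delta * sigma_u, size=u_exact.shape)

# Usage: 1000 uniform samples over the first 10 s, as in the baseline experiment
t = np.linspace(0.0, 10.0, 1000)
u_exact = 0.01 * np.sin(5.655 * t)          # illustrative stand-in for the analytical solution
u_noisy = add_measurement_noise(u_exact, delta=0.01)
```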
In engineering practice, the ease of obtaining different physical parameters varies significantly. Properties such as the mass m and stiffness k can be measured directly. However, parameters such as the damping coefficient c, excitation amplitude F0, and excitation frequency ω are not directly measurable and must be inferred. This challenge highlights the advantage of using PINNs, which combine physical models with observed data to estimate parameters that are otherwise difficult to measure accurately.
To simulate uncertainty in the initial conditions, an initial-guess perturbation of +1% is introduced, leading to starting estimates of approximately c = 3967.2 Ns/m, F0 = 707.0 N, and ω = 5.712 rad/s.
This approach enables robust testing of the parameter identification method in a realistic, noise-affected setting.
5.2. PINN Framework
The core concept behind Physics-Informed Neural Networks (PINNs) is to approximate the solution of a physical system—in this case, the displacement response of a vibrating structure—using a neural network u_θ(t), where θ represents the set of trainable network parameters (weights and biases). Unlike traditional neural networks that rely solely on data, PINNs incorporate known physical laws directly into the learning process, enabling a data-efficient, physics-consistent approximation.
During training, the network is optimized to minimize a composite loss function composed of three terms, each enforcing a different type of constraint:
PDE Residual Minimization:
This term ensures that the neural network solution approximately satisfies the governing differential equation of the SDOF harmonic oscillator. The residual is evaluated at a set of collocation points, where the second and first time derivatives of the network output u_θ(t), together with the output itself, are computed using automatic differentiation. By minimizing this residual, the model adheres to the physical law governing the system.
In the PINN residual formulation, Equation (52), the mass m and stiffness k are treated as known, fixed quantities obtained from experimental characterization of the tested system. In contrast, the damping coefficient c, excitation amplitude F0, and excitation frequency ω are considered unknown trainable parameters and are incorporated into the network in the same manner as the trainable weights, meaning their values are iteratively updated through backpropagation during optimization. To ensure physical plausibility and numerical stability, bounds are imposed on these parameters based on expected measurement uncertainty and admissible ranges, thereby constraining their evolution during training.
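A minimal PyTorch sketch of this residual term is given below, assuming sine-type harmonic forcing and illustrative values for the known mass and stiffness; the trainable tensors c_hat, F0_hat, and w_hat stand in for the unknown physical parameters, and the derivatives are obtained by automatic differentiation as described above.

```python
import torch

# Illustrative known quantities (the paper's m and k are specified in Section 3)
m, k = 1000.0, 32000.0

# Small network u_theta(t); four 50-neuron tanh layers, as described later in the text
net = torch.nn.Sequential(
    torch.nn.Linear(1, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 1),
)

# Unknown physical parameters as trainable tensors, initialized at the 1%-perturbed guesses
c_hat = torch.nn.Parameter(torch.tensor(3967.2))
F0_hat = torch.nn.Parameter(torch.tensor(707.0))
w_hat = torch.nn.Parameter(torch.tensor(5.712))

def pde_residual_loss(t_colloc):
    """Mean squared residual of m*u'' + c*u' + k*u - F0*sin(w*t) at the collocation points."""
    t = t_colloc.clone().requires_grad_(True)
    u = net(t)
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    ddu = torch.autograd.grad(du, t, torch.ones_like(du), create_graph=True)[0]
    residual = m * ddu + c_hat * du + k * u - F0_hat * torch.sin(w_hat * t)
    return torch.mean(residual ** 2)

t_colloc = torch.linspace(0.0, 10.0, 200).reshape(-1, 1)
loss_pde = pde_residual_loss(t_colloc)
```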
Initial Condition Matching:
This term enforces the known initial conditions of the system, namely that the displacement and velocity are both zero at t = 0. These conditions reflect a system starting from rest, and their inclusion helps guide the neural network to a physically consistent solution at the initial time point.
Observation Data Fitting:
This data-driven term minimizes the discrepancy between the network-predicted displacement u_θ(t_i) and the observed (potentially noisy) displacement measurements ũ_i taken at the time instants t_i, summed over all available data points. This ensures that the model remains anchored to real-world measurements.
By combining these three loss components:
The weights λ control the contribution of each loss component, ensuring that the solution satisfies the governing physics, the initial conditions, and data fidelity. In this study, the values are set as λ = [0.5, 0.02, 1.0], assigning the lowest weight to the initial-condition term (see Section 6.1).
A deep feedforward neural network is employed to model the dynamic behavior of an SDOF system under harmonic excitation. The network takes time as input and predicts the displacement response as output. Its architecture comprises four fully connected hidden layers, each with 50 neurons. The hyperbolic tangent activation function is used in all hidden layers to capture the system’s smooth and oscillatory dynamics, while the output layer uses a linear activation to produce the final displacement value. In this experiment, Glorot Normal was selected as the weight initialization method. For the optimizer schedule, Adam was used in the first phase with a learning rate of 0.001 for 10,000 iterations, followed by fine-tuning in the second phase with L-BFGS.
The unknown physical parameters—the damping coefficient c, excitation amplitude F0, and excitation frequency ω—are incorporated directly into the loss function and optimized simultaneously with the network weights. This physics-informed approach enables accurate and robust estimation of these parameters, even under conditions with limited or noisy measurements. The chosen architecture strikes a balance between computational efficiency and the expressive power required to represent the underlying physics of the vibrating system.
In the hybrid PINN–EKF framework, the EKF operates as a recursive correction layer within each training iteration. The PINN predicts the dynamic response based on the physics-informed residuals of the motion equation, while the EKF updates the estimated states and parameters by using the mismatch between the predicted and observed responses. This integration enables adaptive state estimation and real-time correction of the PINN outputs, thereby improving prediction accuracy and robustness under noise or parameter uncertainty.
5.3. Experimental Implementation and Data Synthesis
To evaluate the effectiveness of the proposed parameter estimation method, a synthetic dataset is generated based on the analytical model of a linear SDOF system. The simulation is conducted over a time interval [0, T] with T = 10 s, discretized uniformly into 1000 sampling points to provide sufficient resolution for both training and validation. The benchmark solution u(t_i) is obtained using the fourth-order Runge–Kutta method, which ensures high numerical accuracy when solving ordinary differential equations. To mimic real-world measurement conditions and assess the neural network model's robustness, Gaussian white noise is added to the true displacement. The resulting noisy observations are constructed as ũ_i = u(t_i) + ε_i, where ε_i follows a normal distribution with zero mean and a standard deviation equal to 1% of the standard deviation σ_u of the true signal, i.e., ε_i ~ N(0, (0.01·σ_u)²). This synthetic noisy dataset is used both for training the network and for testing its parameter-inference capabilities.
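A sketch of this data-synthesis step, using a classical fourth-order Runge–Kutta integrator written out explicitly (the mass and stiffness values are illustrative placeholders), is given below.

```python
import numpy as np

def sdof_rhs(t, y, m, c, k, F0, w):
    """State derivative for y = [u, u_dot] of the SDOF oscillator under harmonic forcing."""
    u, v = y
    return np.array([v, (F0 * np.sin(w * t) - c * v - k * u) / m])

def rk4_solve(t_grid, y0, m, c, k, F0, w):
    """Classical fourth-order Runge-Kutta integration on a uniform time grid."""
    y = np.zeros((t_grid.size, 2))
    y[0] = y0
    h = t_grid[1] - t_grid[0]
    for i in range(t_grid.size - 1):
        t, yi = t_grid[i], y[i]
        k1 = sdof_rhs(t, yi, m, c, k, F0, w)
        k2 = sdof_rhs(t + h / 2, yi + h / 2 * k1, m, c, k, F0, w)
        k3 = sdof_rhs(t + h / 2, yi + h / 2 * k2, m, c, k, F0, w)
        k4 = sdof_rhs(t + h, yi + h * k3, m, c, k, F0, w)
        y[i + 1] = yi + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

# Benchmark response on 1000 uniform points over 10 s (m and k are illustrative here)
t_grid = np.linspace(0.0, 10.0, 1000)
u_exact = rk4_solve(t_grid, [0.0, 0.0], m=1000.0, c=3927.922, k=32000.0, F0=700.0, w=5.655)[:, 0]
```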
To ensure the physical plausibility of the network-estimated parameters, appropriate constraint formulations are incorporated into the model. In particular, the damping coefficient c, which must be non-negative, is reparametrized using the softplus transformation, softplus(x) = ln(1 + eˣ).
This transformation preserves differentiability while enforcing positivity. Similarly, the excitation amplitude F0 and the excitation frequency ω are constrained to be strictly positive using the same softplus function.
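In PyTorch, for instance, this reparametrization can be expressed as follows; the raw tensors are what the optimizer updates, while the physical values returned by softplus remain strictly positive (the initial values are the 1%-perturbed guesses used above).

```python
import torch

def positive_param(raw):
    """Map an unconstrained trainable tensor to a strictly positive physical value
    via softplus(x) = ln(1 + exp(x)), which is smooth and differentiable."""
    return torch.nn.functional.softplus(raw)

# For large magnitudes softplus(x) is approximately x, so the raw tensors can be
# initialized directly at the perturbed initial guesses.
c_raw = torch.nn.Parameter(torch.tensor(3967.2))
F0_raw = torch.nn.Parameter(torch.tensor(707.0))
w_raw = torch.nn.Parameter(torch.tensor(5.712))
c_hat, F0_hat, w_hat = positive_param(c_raw), positive_param(F0_raw), positive_param(w_raw)
```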
These constraints help guide the optimization process within a physically meaningful solution space, preventing the neural network from converging to unfeasible parameter values.
A two-stage training strategy is implemented to ensure both fast convergence and precise parameter estimation. In the first stage, the Adam optimizer is employed for its robustness and adaptive learning-rate properties. The initial learning rate is set to 0.001, and the optimizer is run for 10,000 iterations. During this stage, the components of the total loss function are weighted to reflect their importance: the residual of the physical equation and the observation-data loss are given unit weights, while the initial displacement and velocity constraints are each given a higher weight of 100.
This weighting scheme emphasizes enforcing initial conditions, which is critical for dynamic systems.
In the second stage, the Limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) optimizer is used to refine the parameters obtained from the first stage. L-BFGS is a quasi-Newton optimization method known for its effectiveness on smooth, deterministic problems. A maximum iteration count and a tight line-search tolerance are prescribed to ensure convergence to a high-accuracy solution. This two-stage optimization framework enables the network first to explore the parameter space broadly and then to adjust the estimated parameters precisely, yielding a stable and physically consistent model.
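The two-stage schedule can be sketched as follows; for brevity the composite physics-informed loss is replaced by a simple least-squares stand-in, so the snippet only illustrates the optimizer hand-off from Adam to L-BFGS rather than the full training pipeline.

```python
import torch

# Minimal stand-in problem: the same two-stage schedule applied to a tiny curve fit.
# In the actual framework the loss would be the weighted PDE + initial-condition + data loss.
t_obs = torch.linspace(0.0, 10.0, 1000).reshape(-1, 1)
u_obs = 0.01 * torch.sin(5.655 * t_obs)
net = torch.nn.Sequential(torch.nn.Linear(1, 50), torch.nn.Tanh(), torch.nn.Linear(50, 1))

def total_loss():
    return torch.mean((net(t_obs) - u_obs) ** 2)

# Stage 1: Adam (learning rate 0.001) for broad exploration
adam = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):                      # 10,000 iterations in the paper; shortened here
    adam.zero_grad()
    total_loss().backward()
    adam.step()

# Stage 2: L-BFGS for high-precision refinement of the Adam solution
lbfgs = torch.optim.LBFGS(net.parameters(), max_iter=500, line_search_fn="strong_wolfe")

def closure():
    lbfgs.zero_grad()
    loss = total_loss()
    loss.backward()
    return loss

lbfgs.step(closure)
```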
6. Results and Discussion
The results presented in this study are based on analytical harmonic solutions, which provide an exact and interpretable benchmark for assessing the predictive accuracy of the hybrid PINN–EKF framework. This controlled validation environment allows for precise evaluation of parameter estimation, noise sensitivity, and convergence performance. However, the absence of validation using experimental or spectrum-compatible seismic inputs limits the direct generalization of the findings to real earthquake conditions. Future work will therefore focus on extending the framework to recorded or synthetic ground motions and to nonlinear finite element simulations to establish stronger empirical consistency and enhance the model’s applicability for Eurocode 8-related analyses and code calibration.
6.1. Parameter Estimation
The PINN is trained using 1000 uniformly spaced samples over the first 10 s. Based on this input, the model predicts the displacement at each time step and iteratively refines the estimates of the unknown parameters toward their true values. Table 4 presents the estimated values of the damping coefficient c, excitation amplitude F0, and excitation frequency ω at every 1000 iterations of the Adam optimizer.
Table 4.
Parameter Estimates and Errors during Adam Training (every 1000 iterations).
The results in Table 4 show the PINN model's training behavior over 10,000 iterations of the Adam optimizer. At the start (iteration 0), no parameter estimates are available and all loss terms are undefined. As training begins, the loss values and parameter estimates evolve rapidly. The physics-informed loss exhibits considerable fluctuations throughout training, which is expected, as the model must balance the physical constraints against data fitting, particularly under noisy conditions. Despite these fluctuations, the initial displacement and velocity losses quickly drop to negligible values, indicating that the model rapidly satisfies the initial conditions.
The data loss remains consistently low throughout training, reflecting strong alignment with the observed displacement values. As the optimizer proceeds, the estimated parameters gradually converge toward their true values. For instance, the damping coefficient c starts from an initial estimate of approximately 3967.2 Ns/m and smoothly decreases to 3966.6 Ns/m by iteration 10,000, closely matching the target value. Similarly, the excitation amplitude F0 decreases steadily from 706.05 N to 697.26 N, and the excitation frequency ω changes from 5.676 rad/s to 5.679 rad/s as it approaches the true value.
The total training time also increases with the number of iterations, reaching approximately 246 s by the end of the Adam phase. These results demonstrate that the PINN model accurately estimates unknown physical parameters while satisfying the governing differential equations and initial conditions. The convergence of the estimates over time illustrates the effectiveness of the training strategy and the robustness of the model, even in the presence of synthetic noise.
In the second stage of training, the model is fine-tuned using the L-BFGS optimizer over a duration of 149.72 s. Table 5 presents the final estimates of the damping coefficient c, excitation amplitude F0, and excitation frequency ω, which are 3966.629 Ns/m, 697.262 N, and 5.694 rad/s, respectively. Compared to the true values—3927.922 Ns/m, 700.000 N, and 5.655 rad/s—the relative errors are 0.985%, 0.391%, and 0.692%, all below 1%. These results confirm the PINN's high accuracy in recovering the unknown parameters. To simulate realistic measurement uncertainty, the initial guesses were intentionally perturbed by 1%; that the final errors fall below this threshold highlights the model's robustness and reliable convergence, even when starting from imprecise initial conditions. The particularly low error in estimating F0 further demonstrates the method's effectiveness in identifying parameters that are difficult to measure directly in practical settings. The estimates reported in Table 5 correspond to deterministic runs under controlled harmonic excitation, with the true parameter values stated in the analytical model; repeated stochastic realizations and confidence bounds are reserved for future work.
Table 5.
Final parameter estimation results.
An examination of Table 4 indicates that the damping coefficient c changes only slightly from its initial guess, highlighting the importance of a well-chosen starting point for this parameter. In contrast, the excitation amplitude F0 adjusts rapidly during training, suggesting lower sensitivity to its initial value, although extended iterations without control may lead to deviation. The excitation frequency ω displays a moderate rate of adjustment, gradually refining as training progresses.
As training advances, the PDE residual loss generally decreases, though the rate of improvement diminishes over time. This behavior highlights the importance of balancing the number of iterations and computational cost to achieve both efficiency and accuracy in parameter estimation.
Using the final parameter estimates, the displacement-time response is plotted in Figure 11, with the predicted curve overlaid on the ground-truth solution and noisy observational data. Table 6 reports the relative displacement errors at the first ten sampled time points, while the mean squared error across all 1000 samples further confirms the model’s ability to replicate the true dynamic behavior with high precision.
Figure 11.
Displacement Response Comparison.
Table 6.
Estimated, True, and Observed Displacements with Relative Errors (First 10 Points).
Figure 11 illustrates the dynamic behavior of the SDOF system under harmonic excitation, comparing the displacement predicted using actual parameters, estimated parameters, and the noisy observation points. The solid blue curve represents the ground-truth displacement response, while the dashed green curve shows the predicted response based on the final PINN estimates. Red dots denote the noisy observational data points used to train the model.
The estimated response closely follows the true curve throughout the 10 s time window, with both curves exhibiting nearly identical amplitude and frequency characteristics. This close alignment highlights the accuracy of the final parameter estimates for the damping coefficient c, excitation amplitude F0, and excitation frequency ω. Minor deviations between the estimated and true curves, particularly toward the end of the time interval, are negligible and fall within acceptable margins, demonstrating the robustness of the model even in the presence of synthetic measurement noise.
The red observation points are densely distributed and consistently lie along both curves, confirming that the PINN successfully reconciles physical laws with empirical data. The visual agreement among the true response, the model prediction, and the noisy data confirms that the trained network not only fits the data but also generalizes well to represent the system's underlying dynamics. This result reinforces the model's effectiveness in capturing the system's actual behavior from limited, noisy measurements.
Variables in Figure 12 include PDE loss, initial condition loss, observation loss, and the relative errors of the three estimated parameters. The color scale and numerical values denote Pearson correlation coefficients ranging from −1 to 1. The results show that PDE loss and observation loss are strongly and negatively correlated with parameter errors, indicating that reducing these losses directly decreases estimation errors. In contrast, initial condition loss exhibits only a weak positive correlation with parameter errors, suggesting a minimal role in influencing accuracy. Consequently, PDE loss (enforcing physical constraints) and observation loss (ensuring data fidelity) emerge as the primary optimization objectives for error reduction. This outcome is consistent with the adopted loss weighting λ = [0.5, 0.02, 1.0], confirming the rational design of the PINN loss function. The negligible impact of the initial condition term further validates the choice to assign it a lower weight, reinforcing the scientific basis of the overall loss allocation strategy.
Figure 12.
Heatmap of Correlation Matrix for Loss and Error.
The results in Table 6 indicate that, except for the endpoint at t = 10 s and a few peak or trough positions, the relative displacement error across the sampled points remains consistently low, generally below 1%. Over the entire 1000-point time series, the mean squared error (MSE) in relative displacement is 3.28%, affirming the model's capacity to retain accuracy even in the presence of measurement noise and perturbed parameters. The table compares the true displacements, noisy observations, and PINN-predicted displacements for the first 10 time steps. The estimates show excellent agreement with the true values, with relative errors predominantly under 0.5%. At t = 0.00 s, both the true and predicted displacements are zero, yielding no error. As time progresses, the model continues to track the true displacement curve closely. For example, at t = 0.04 s, the true value is 2.80 × 10−6 m, while the predicted value is 2.81 × 10−6 m, corresponding to a relative error of just 0.2946%. At t = 0.09 s, the relative error drops further to 0.28%, illustrating the model's precision.
Despite fluctuations in observed values caused by Gaussian noise, the PINN successfully suppresses this noise and extracts the underlying dynamics. At t = 0.02 s, the estimated displacement remains nearly indistinguishable from the actual value, with an error of only 1.05 × 10−7%. These findings confirm the PINN’s effectiveness in reconstructing the displacement-time response and accurately estimating system behavior under noisy conditions.
In the numerical experiments described above, both the added data noise and the initial guess perturbations were relatively small. In real-world engineering, however, measurement biases can be significantly larger and can substantially affect parameter estimation accuracy. The following sections investigate how three factors—sampling size, measurement noise, and initial-guess bias—impact the final parameter estimates.
Figure 13 compares the fitted displacement-time curve with the actual response, along with the residuals at the corresponding time points. Consistent with the results in Table 6, the relative displacement error remains generally below 1%, confirming the PINN’s effectiveness in reconstructing displacement histories. The residuals fluctuate randomly around zero, without any systematic upward or downward trend, further indicating the absence of model bias. While slightly larger deviations occur at the peaks and troughs of the displacement curve, these errors remain small compared to the overall displacement amplitude. Therefore, the predicted curve closely matches the actual response in both amplitude and frequency characteristics, demonstrating that the model successfully captures the essential physical dynamics and provides highly reliable fitting results.
Figure 13.
Displacement Comparison and Residual Analysis.
6.2. Effect of Sampling Size, Measurement Noise, and Initial-Guess Bias
In the baseline experiment, 1000 samples were used over a 10 s interval. While increasing the number of samples can enhance accuracy, it also increases computational costs. To evaluate this trade-off, three cases were tested with 1000, 2000, and 5000 samples, respectively. The estimated parameters and corresponding run times for each case are presented in Table 7.
Table 7.
Parameter Estimates, Relative Errors, and Run Times for Different Sampling Sizes.
As shown in Table 7, increasing the sample size from 1000 to 5000 has minimal impact on the final parameter estimates, which remain virtually unchanged. This suggests that PINNs can achieve high accuracy without relying on excessively large datasets. However, the computational cost increases significantly—exceeding 2000 s for 5000 samples—and the relative error for ω even rises slightly at this highest sampling rate. These findings suggest that using 1000 samples offers an optimal balance between computational efficiency and estimation accuracy. Therefore, all subsequent experiments are conducted with 1000 samples.
In practical scenarios, displacement measurements are affected by noise, with magnitudes that can vary considerably. In the baseline case, noise was added with a standard deviation equal to 1% of the signal's standard deviation. To assess the model's sensitivity to different noise levels, additional tests were conducted at higher noise levels of up to 10%. The resulting parameter estimates and runtimes are summarized in Table 8.
Table 8.
Parameter Estimates, Relative Errors, and Runtime for Different Noise Levels.
As shown in Table 8, the runtime remains essentially constant across the noise levels, indicating that the noise magnitude does not significantly affect computational efficiency. The estimates of c and F0 vary only slightly as the noise increases. In contrast, the accuracy of ω degrades markedly: its relative error grows from 0.690% at 1% noise to 6.313% at 10% noise—nearly a ninefold increase—showing that frequency estimation is highly sensitive to measurement noise.
Since the parameters c, F0, and ω cannot be measured directly, their initial estimates are often subject to significant error, and more accurate initial guesses tend to produce better displacement predictions. Given the sensitivity of PINN performance to these starting values, an analysis was conducted using initial-guess biases ranging from 1% to 10%. The corresponding parameter estimates and runtimes are presented in Table 9.
Table 9.
Parameter Estimates, Relative Errors, and Run Times for Different Initial-Guess Biases.
According to Table 9, as the initial-guess error increases, all parameter estimation errors grow, confirming that a good starting point is crucial. However, c and F0 still achieve errors below the bias level, indicating the model's convergence capability: with sufficient iterations, they approach the true values. In contrast, the relative error of ω can exceed the initial bias, revealing its particular sensitivity to poor initialization. Thus, when the initial bias is large, the number of training iterations should be increased to allow the PINN to converge, at the cost of greater computational expense.
The robustness of the proposed hybrid model was assessed by examining the sensitivity of estimation accuracy to changes in several key factors. In the numerical experiments, all parameters except one were held constant, allowing only a single variable—such as measurement noise, data volume, or initial parameter error—to vary at a time. Each configuration was tested through 30 independent runs, from which the average prediction accuracy and standard deviation were computed. This analysis revealed that the damping ratio and excitation amplitude estimation errors remained below 1.5% and 0.8%, respectively, even under 15% Gaussian noise, indicating strong robustness to measurement uncertainty. The low variance across repeated trials demonstrates the stability of the proposed approach. Future research will incorporate Monte Carlo–based uncertainty quantification to provide explicit confidence intervals for both time-response and parameter estimates.
Figure 14 illustrates the accuracy of parameter identification and highlights the relative influence of initial guess bias versus data volume. The results show that initial bias has a much more substantial impact on identification accuracy than the number of sampled data points. As the ratio of the estimated parameter to the true value increases from 1.01 to 1.10, the relative error of c rises markedly from about 0.99% to nearly 9.99%. By contrast, varying the data volume from 1000 to 5000 samples produces only minor changes, with the relative error remaining almost constant at ~0.99%.
Figure 14.
Heatmap of Relative Error for Parameter c (Example with σu = 1%).
These findings are consistent with Table 9, where the relative error of c increases sharply from 0.985% to 9.994% as the initial bias grows from 1% to 10%, mirroring the transition from light to dark colors in the heatmap. Likewise, Table 7 supports this observation by confirming that larger data volumes yield negligible improvements in c recognition accuracy. Therefore, these results demonstrate the excellent data efficiency of the PINN method, while underscoring the sensitivity of parameter estimation to the quality of initialization.
From a practical standpoint, this guides the selection of initial parameter estimates and the determination of appropriate data collection volumes. It also highlights that the damping coefficient c remains particularly difficult to identify accurately in Eurocode 8 applications, which explains the reliance of traditional codes on conservative estimates.
Figure 15 presents the relative error of the frequency parameter ω in the form of a contour plot. The horizontal axis denotes the ratio of the initial estimate to the true parameter value, while the vertical axis represents the percentage of measurement noise. Contour lines indicate the relative error of ω, ranging from 0% to 12%. The results demonstrate that the model achieves high accuracy in identifying ω, with relative errors typically ranging from 0.02% to 0.10%. The low-error region (relative error < 5%) is concentrated in the range of noise = 0–0.08 and init = 1.00–1.04, corresponding to conditions of low measurement noise and minor initial deviations. However, when the initial estimate exceeds 1.08 times the true value, the relative error rapidly rises above 10%, indicating significant deterioration in accuracy.
Figure 15.
Contour Plot of Relative Error for Parameter omega.
These findings show that ω is far more sensitive to initialization quality than to data noise. This provides a quantitative guideline for experimental design, emphasizing the importance of careful initialization to prevent estimation failures and ensure the reliable identification of frequency parameters.
6.3. Comparison of Fitting Performance for Displacement Curves
To compare the displacement-time curves fitted by the PINN and data assimilation methods with the exact solution of the model, the three curves are plotted over the interval [0, 20 s], as shown in Figure 16. Both fitted curves closely match the numerical solution, demonstrating high accuracy and reliable fitting performance. It is also noted that while EKF performs well under weakly nonlinear conditions, its accuracy diminishes when strong nonlinearity or non-Gaussian disturbances dominate, as discussed in Section 4.1. In such cases, the PINN framework offers superior stability.
Figure 16.
Comparison of Displacement Predictions from Classical Eurocode 8, EKF, and PINN Methods.
According to Figure 16, the EKF method exhibits a significant deviation from the model’s numerical solution within the initial 1 s, primarily due to insufficient initial data. Additionally, the fitting accuracy is lower at the peaks and troughs of the displacement curve compared to other regions; however, the overall fitting performance remains satisfactory across most time intervals. A notable advantage of the EKF approach is its efficiency: it requires only a small number of sampling points (200 within 20 s) and completes the computation in under 1 s.
In contrast, the PINN method provides consistently accurate fitting across nearly all points, with relatively minor errors even at the peaks and troughs. Importantly, these deviations do not result in significant error values. However, this higher accuracy comes at the cost of increased computational resources, including a denser sampling strategy (1000 data points within 10 s) and a longer computation time (exceeding 300 s).
The results indicate that both the PINN and data assimilation methods are capable of effectively fitting solutions to single-degree-of-freedom harmonic excitation problems. The choice of method should be guided by the specific requirements of the application, balancing computational efficiency and fitting accuracy.
Although the proposed hybrid framework demonstrates high accuracy and robustness in numerical experiments, its practical deployment requires consideration of computational cost and real-time feasibility. The training of PINN primarily contributes to the computational demand, particularly during the offline learning phase. However, once trained, the PINN’s inference stage and the EKF’s recursive updates are computationally lightweight and suitable for online or edge applications. For example, the EKF update step scales linearly with the number of system states, and the trained PINN model can operate efficiently on embedded processors or GPU-enabled devices with reduced precision (e.g., float16). Further optimization can be achieved through model compression, pruning, and transfer learning to adapt pre-trained models to new structures with minimal retraining cost. Therefore, while the framework is currently evaluated in a research setting, its architecture is inherently compatible with real-time structural monitoring and control systems, offering a feasible path to deployment on digital twin and smart infrastructure platforms. Future work will focus on implementing the proposed hybrid algorithm in embedded and edge computing environments to enable real-time monitoring and control of structural systems.
In this study, a linear harmonic SDOF configuration is used as a conceptual benchmark to evaluate the performance of the proposed PINN–EKF framework under controlled dynamic conditions. While relevant to the broader themes of Eurocode-based dynamic amplification, direct comparison with spectrum-compatible seismic ground motions will be addressed in future extensions of this framework. It should be noted that real seismic excitation involves transient, broadband inputs, nonlinear hysteretic behavior, and multi-degree-of-freedom (MDOF) coupling. As such, the present formulation does not replace Eurocode 8 response-spectrum procedures but rather provides a methodological foundation for adaptive dynamic estimation that can be expanded to address seismic loading conditions. The current work is therefore limited to linear harmonic excitation of an SDOF system. Future research will extend the framework to spectrum-compatible seismic time histories, bilinear and nonlinear SDOF models, and multi-degree-of-freedom coupling to enable more direct alignment with Eurocode 8 design provisions and response-spectrum methodologies.
6.4. Enhancement of DMF Calculations for Eurocode 8 Using Hybrid Modeling
The Dynamic Magnification Factor (DMF) is a crucial parameter in Eurocode 8 (BS EN 1998) used to assess how much a structure’s dynamic displacement exceeds its static displacement under harmonic or seismic excitation. The current code framework estimates DMF using fixed expressions that assume simplified damping models and do not consider real-time variability in loading or system properties. This section presents a machine-learning-assisted refinement of DMF predictions, introducing an adaptive formulation based on displacement estimates from Physics-Informed Neural Networks (PINNs) and Extended Kalman Filters (EKFs).
Eurocode 8 employs a response-spectrum-based approach, where the amplification of the dynamic response is estimated using the frequency ratio β = ω/ωₙ and the damping ratio ζ. As presented in Section 2.3, the classical DMF expression for an SDOF system, given in Equation (21), is DMF = 1/√[(1 − β²)² + (2ζβ)²].
This formulation is based on steady-state harmonic excitation but assumes constant damping and no system variability. To increase precision, we propose a dynamic formulation of the DMF based on the displacement predicted by the PINN model. The PINN approximates the solution of the governing differential equation of motion presented in Equation (1).
The PINN is trained to satisfy this equation while minimizing discrepancies from sparse or noisy displacement data. Once trained, it provides an accurate estimate of the displacement amplitude under harmonic excitation.
The enhanced DMF is then formulated as the ratio of the PINN-predicted dynamic displacement amplitude to the static displacement F0/k. This yields a time-dependent, damping-sensitive amplification measure, enabling the engineer to monitor the DMF in real time and account for transient effects.
Using the EKF, the system's displacement and velocity are recursively estimated from noisy sensor data. The predicted state vector includes both displacement and velocity, so the DMF can be recomputed at each time step as the ratio of the currently estimated dynamic displacement to the static displacement.
This real-time update enables designers and structural monitoring systems to effectively track peak response ratios under varying frequency inputs, such as those caused by ground motion. It also supports informed decision-making for adjusting structural design or implementing retrofitting measures when needed. Moreover, it facilitates the activation of damping or vibration control systems once critical thresholds are exceeded, thereby enhancing the structure’s resilience and safety under dynamic loading conditions.
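One possible per-step formulation, consistent with the ratio interpretation above but not necessarily identical to the expression used in the paper, is sketched below: the running envelope of the estimated displacement is divided by the static displacement F0/k.

```python
import numpy as np

def running_dmf(u_est, F0, k):
    """Ratio of the estimated dynamic displacement envelope to the static displacement F0/k,
    accumulated over time so it can be updated at every filter step."""
    u_static = F0 / k
    return np.maximum.accumulate(np.abs(u_est)) / u_static

# Placeholder EKF displacement history (in practice, the filtered displacement state at each step)
t = np.arange(0.0, 20.0, 0.01)
u_est = 0.012 * np.sin(5.655 * t) * (1.0 - np.exp(-0.5 * t))
dmf_track = running_dmf(u_est, F0=700.0, k=32000.0)
print(dmf_track[-1])
```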
The proposed machine-learning-based refinement of the DMF offers several key advantages over the static formulations currently prescribed in Eurocode 8. Unlike the traditional approach, which assumes fixed damping and loading conditions, the refined model adapts to changing excitation characteristics and variations in structural parameters. This adaptability enables more accurate predictions, particularly in the resonance region, where even minor changes in the frequency ratio can result in significant fluctuations in amplification. Additionally, the model supports ongoing serviceability assessments by enabling real-time monitoring and evaluation during a structure's operational phase, rather than being limited to the design stage. Given these benefits, the enhanced DMF formulation may be proposed as an annex or supplementary method to Eurocode 8, offering a higher-fidelity alternative for dynamic assessments in applications where precision and real-time responsiveness are critical.
Figure 17 presents a comparative analysis of Dynamic Magnification Factor (DMF) predictions derived from three distinct approaches: the classical Eurocode 8 method, the PINN-based method, and the EKF-based method. The Eurocode 8 method follows the standard DMF formula, assuming constant damping and linear system behavior. The PINN approach, on the other hand, uses a neural network trained on the governing differential equations of an SDOF system under harmonic excitation, enabling adaptive DMF predictions that account for variable damping and system nonlinearities. The EKF method provides real-time state estimation, enabling dynamic DMF calculations that adjust to changing system parameters and external excitations.
Figure 17.
Comparison of DMF Predictions from Classical Eurocode 8, PINN, and EKF Methods.
As shown in Figure 17, the machine-learning-enhanced methods (PINN and EKF) offer more accurate and adaptive DMF predictions, particularly near resonance (β ≈ 1). In this region, the classical method can be overly conservative or fail to capture parameter sensitivities, whereas the machine-learning approaches provide a more precise representation of the system's dynamic behavior. These enhancements highlight the potential of integrating machine-learning techniques into structural design codes to improve safety and efficiency.
According to the expression for the Dynamic Magnification Factor (DMF) in Equation (21), the DMF curve depends solely on the damping ratio ζ (or, equivalently, the damping coefficient c) and is independent of the excitation amplitude F0 as well as of any specific value of the excitation frequency ω.
Specifically, variations in c (and hence in ζ) significantly affect the peak value and bandwidth of the DMF curve: lower damping produces a higher peak and a sharper curve, while higher damping leads to a lower peak and a flatter, broader response. In contrast, changing F0 merely scales the amplitude of the displacement response proportionally without altering the shape of the DMF curve.
Although ω appears as the independent variable in the DMF expression, it serves as a sweep parameter along the horizontal axis rather than a fixed input value. The DMF curve therefore represents the system's theoretical frequency-response characteristics rather than the outcome of a particular excitation frequency in an experiment.
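The sketch below evaluates the classical DMF expression over a frequency-ratio sweep for several damping ratios, illustrating how lower damping sharpens and raises the resonance peak while the excitation amplitude plays no role.

```python
import numpy as np

def dmf(beta, zeta):
    """Classical dynamic magnification factor of an SDOF oscillator:
    DMF = 1 / sqrt((1 - beta^2)^2 + (2*zeta*beta)^2), with beta = omega / omega_n."""
    return 1.0 / np.sqrt((1.0 - beta ** 2) ** 2 + (2.0 * zeta * beta) ** 2)

beta = np.linspace(0.0, 3.0, 601)            # frequency-ratio sweep along the horizontal axis
for zeta in (0.02, 0.05, 0.10, 0.20):        # lower damping -> higher, sharper resonance peak
    curve = dmf(beta, zeta)
    print(f"zeta = {zeta:.2f}: peak DMF = {curve.max():.2f} at beta = {beta[curve.argmax()]:.2f}")
```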
When the data assimilation method and the PINN model are used to fit the displacement curve and estimate the parameters c, F0, and ω, only the estimated value of c influences the shape of the corresponding DMF curve, as illustrated in Figure 17.
The results show that the value of c estimated by the PINN model yields a DMF curve closely matching the true one, demonstrating high fitting accuracy, whereas the EKF estimate is slightly less accurate. The PINN model uses more sampling points and requires a longer training time, making it less efficient than the data assimilation method; however, this comes with the advantage of higher accuracy, as its estimate of c is closer to the actual value. Both methods therefore exhibit distinct strengths and limitations, and the choice between them should be based on the specific application requirements, balancing computational efficiency and estimation accuracy.
Figure 18 presents the Bootstrap confidence interval for parameter c. Using 2000 resamples (B = 2000), the 95% confidence interval is [3966.70, 4140.75], bounded by the red dashed lines. The bootstrap distribution exhibits approximately normal shape, with the mean (centered around 4050, indicated by the solid black line) lying well within the interval. The relatively narrow interval, which also encompasses the actual parameter value, demonstrates both the precision and accuracy of the estimation. Therefore, these findings confirm the statistical robustness of the proposed parameter identification method.
Figure 18.
Bootstrap Confidence Interval for Parameter c.
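A percentile-bootstrap interval of this kind can be generated as sketched below; the sample of damping estimates is a synthetic placeholder, and B = 2000 resamples are drawn as in the text.

```python
import numpy as np

def bootstrap_ci(estimates, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean of a set of parameter estimates."""
    rng = np.random.default_rng(seed)
    estimates = np.asarray(estimates)
    means = [np.mean(rng.choice(estimates, size=estimates.size, replace=True))
             for _ in range(n_boot)]
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Placeholder sample of identified damping coefficients (Ns/m)
c_samples = np.random.default_rng(1).normal(4050.0, 90.0, size=30)
print(bootstrap_ci(c_samples, n_boot=2000))
```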
6.5. Physical Interpretation and Relevance of the Estimated Parameters
The estimated parameters obtained through the hybrid PINN–EKF framework—particularly the stiffness k, damping ratio ζ, and natural frequency ωₙ—carry direct physical significance that extends beyond numerical prediction accuracy. Variations in the identified stiffness can indicate potential stiffness degradation, connection looseness, or material softening within the structural system, serving as an early indicator of damage or reduced load-carrying capacity. The damping ratio quantifies the system's energy-dissipation capacity; consistent or increased damping estimates suggest stable or enhanced dynamic resilience, while a reduction in ζ may imply diminished energy-absorption capability due to cracking, fatigue, or boundary deterioration. Accurate estimation of the excitation amplitude and frequency further aids in identifying the external loading conditions, providing valuable context for vibration-sensitive or seismic-prone structures.
From a design-code perspective, these physics-informed estimates help evaluate and potentially calibrate assumptions in standards such as Eurocode 8, where damping correction factors and stiffness-dependent response spectra play a central role in seismic performance prediction. By connecting analytical parameters to measurable physical responses, the hybrid framework bridges the gap between dynamic system identification and structural health monitoring, offering a pathway for future integration into performance-based design and code calibration procedures.
7. Conclusions
This study presented a hybrid physics-informed and data-driven framework integrating the Extended Kalman Filter (EKF) with Physics-Informed Neural Networks (PINNs) for adaptive estimation of structural dynamic responses. The approach demonstrated high numerical accuracy and stability under varying damping ratios and noise levels, highlighting its potential as an adaptive alternative to conventional Eurocode 8-based dynamic amplification and response estimation procedures. The results emphasize the PINN’s capacity to advance structural dynamics modeling beyond simplified design code assumptions. Its ability to extract physical parameters from limited or noisy data makes it particularly suitable for seismic-prone and vibration-sensitive structures.
The proposed framework provides adaptive and robust predictions of vibratory behavior, supporting real-time structural health monitoring, intelligent control, and design verification. Compared to the EKF, the PINN exhibits superior predictive performance, particularly in transient and high-sensitivity response phases. It maintains high displacement accuracy across 1000 time samples, even in the presence of noise or biased initial guesses, with relative errors below 1% in damping, excitation amplitude, and frequency identification. These findings confirm the method’s robustness and practical value for vibration-sensitive infrastructure and future code-based design refinement.
Beyond these quantitative results, the broader contribution of this work lies in establishing a unified computational framework that merges analytical modeling, data assimilation, and machine learning for real-time system identification. The current validation is limited to a linear single-degree-of-freedom oscillator under harmonic excitation, serving as a conceptual benchmark. Future work will extend this framework to nonlinear, multi-degree-of-freedom systems subjected to recorded or spectrum-compatible seismic ground motions, enabling a more direct alignment with Eurocode 8 calibration and performance-based design.
From a practical perspective, this hybrid model shows strong potential for integration into performance-based seismic design workflows, structural health monitoring, and digital twin systems. Through further development, the methodology can evolve from proof-of-concept validation toward a fully deployable tool for modern structural engineering practice—supporting the modernization of predictive models used in standards such as Eurocode 8 [47,48,49] and contributing to the creation of resilient and intelligent infrastructure systems.
Possible Directions for Future Studies
Building on the promising results of this study, several directions for future research emerge that could further enhance the applicability and generalization of the proposed hybrid framework. One key area is extending the methodology to multi-degree-of-freedom (MDOF) systems and complex structural assemblies, where interactions between modes and higher-order effects are critical. Investigating the framework’s performance under non-harmonic and stochastic excitations—such as seismic ground motions, wind loads, or traffic-induced vibrations—would provide deeper insights into its robustness under real-world loading conditions. Moreover, integrating more sophisticated optimization techniques, such as Bayesian inference or evolutionary algorithms, could improve convergence speed and uncertainty quantification in parameter estimation. Coupling the PINN and EKF components with real-time sensor networks would enable live structural health monitoring systems that adapt to changing conditions and operational demands. Finally, large-scale experimental validation and collaboration with industry partners could help benchmark the framework’s predictive accuracy against full-scale structural tests, ultimately informing the development of performance-based design guidelines and contributing to formal updates of Eurocode 8 and BS 5400. These advancements would strengthen the scientific foundation of structural dynamics and promote the adoption of intelligent, data-driven approaches in engineering design and safety assessment.
Author Contributions
Conceptualization, A.A.P., K.D.T., Y.L. and S.Z.; Methodology, A.A.P., K.D.T., Y.L. and S.Z.; Software, A.A.P. and Y.L.; Validation, A.A.P., K.D.T., Y.L. and S.Z.; Formal analysis, A.A.P., K.D.T., Y.L. and S.Z.; Investigation, A.A.P., K.D.T., Y.L. and S.Z.; Resources, Y.L.; Data curation, A.A.P. and Y.L.; Writing—original draft, A.A.P.; Writing—review and editing, A.A.P., K.D.T., Y.L. and S.Z.; Visualization, Y.L.; Supervision, A.A.P., K.D.T. and S.Z.; Project administration, A.A.P.; Funding acquisition, A.A.P. All authors have read and agreed to the published version of the manuscript.
Funding
This research was supported by: Science and Technology Department of Sichuan Province (2022YFWZ0010).
Data Availability Statement
The datasets used or analyzed during the current study are available from the corresponding author upon reasonable request.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Adeli, H.H. Neural networks in civil engineering: 1989–2000. Comput. Aided Civ. Infrastruct. Eng. 2001, 16, 126–142. [Google Scholar] [CrossRef]
- Kicinger, R.; Arciszewski, T.; De Jong, K. Evolutionary computation and structural design: A survey of the state-of-the-art. Comput. Struct. 2005, 83, 1943–1978. [Google Scholar] [CrossRef]
- Saka, M.P.; Geem, Z.W. Mathematical and metaheuristic applications in design optimization of steel frame structures: An extensive review. Math. Probl. Eng. 2013, 2013, 271031. [Google Scholar] [CrossRef]
- Xie, Y.; Ebad Sichani, M.; Padgett, J.E.; DesRoches, R.R. The promise of implementing machine learning in earthquake engineering: A state-of-the-art review. Earthq. Spectra 2020, 36, 1769–1801. [Google Scholar] [CrossRef]
- De Lautour, O.R.; Omenzetter, P. Prediction of seismic-induced structural damage using artificial neural networks. Eng. Struct. 2009, 31, 600–606. [Google Scholar] [CrossRef]
- Oh, B.K.; Park, Y.; Park, H.S. Seismic response prediction method for building structures using convolutional neural network. Struct. Control Health Monit. 2020, 27, e2519. [Google Scholar] [CrossRef]
- Gharehbaghi, S.; Yazdani, H.; Khatibinia, M. Estimating inelastic seismic response of reinforced concrete frame structures using a wavelet support vector machine and an artificial neural network. Neural Comput. Appl. 2020, 32, 2975–2988. [Google Scholar] [CrossRef]
- Hait, P.; Sil, A.; Choudhury, S. Seismic damage assessment and prediction using artificial neural network of RC building considering irregularities. J. Struct. Integr. Maint. 2020, 5, 51–69. [Google Scholar] [CrossRef]
- Kim, T.; Kwon, O.S.; Song, J. Deep learning based seismic response prediction of hysteretic systems having degradation and pinching. Earthq. Eng. Struct. Dyn. 2023, 52, 2384–2406. [Google Scholar] [CrossRef]
- Liao, Y.; Tang, H.; Li, R.; Ran, L.; Xie, L. Response Prediction for Linear and Non-linear Structures Based on Data-Driven Deep Learning. Appl. Sci. 2023, 13, 5918. [Google Scholar] [CrossRef]
- Hareendran, S.P.; Alipour, A. Prediction of non-linear structural response under wind load using deep learning techniques. Appl. Soft Comput. 2022, 129, 109424. [Google Scholar] [CrossRef]
- De Iuliis, M.; Miceli, E.; Castaldo, P. Machine learning modelling of structural response for different seismic signal characteristics: A parametric analysis. Appl. Soft Comput. 2024, 164, 112026. [Google Scholar] [CrossRef]
- Won, J.; Shin, J. Machine learning-based approach for seismic damage prediction method of building structures considering soil-structure interaction. Sustainability 2021, 13, 4334. [Google Scholar] [CrossRef]
- Luzi, L.; Puglia, R.; Russo, E.; ORFEUS WG5. Engineering Strong Motion Database, version 1.0; Observatories & Research Facilities for European Seismology; Istituto Nazionale di Geofisica e Vulcanologia: Rome, Italy, 2016.
- Mousavi, S.M.; Sheng, Y.; Zhu, W.; Beroza, G.C. STanford EArthquake Dataset (STEAD): A Global Data Set of Seismic Signals for AI. IEEE Access 2019, 7, 179464–179476. [Google Scholar] [CrossRef]
- Bakhshi, H.; Bagheri, A.; Ghodrati Amiri, G.; Barkhordari, M.A. Estimation of spectral acceleration based on neural networks. Proc. Inst. Civ. Eng. Struct. Build. 2014, 167, 457–468. [Google Scholar] [CrossRef]
- Khosravikia, F.; Clayton, P.; Nagy, Z. Artificial neural network-based framework for developing ground-motion models for natural and induced earthquakes in Oklahoma, Kansas, and Texas. Seismol. Res. Lett. 2019, 90, 604–613. [Google Scholar] [CrossRef]
- Cabalar, A.F.; Cevik, A. Genetic programming-based attenuation relationship: An application of recent earthquakes in turkey. Comp. Geosci. 2009, 35, 1884–1896. [Google Scholar] [CrossRef]
- Hamze-Ziabari, S.M.; Bakhshpoori, T.T. Improving the prediction of ground motion parameters based on an efficient bagging ensemble model of M5# and CART algorithms. Appl. Soft Comput. 2018, 68, 147–161. [Google Scholar]
- Akhani, M.; Kashani, A.R.; Mousavi, M.; Gandomi, A.H. A hybrid computational intelligence approach to predict spectral acceleration. Measurement 2019, 138, 578–589.
- Zhang, T.; Xu, W.; Wang, S.; Du, D.; Tang, J. Seismic response prediction of a damped structure based on data-driven machine learning methods. Eng. Struct. 2024, 301, 117264.
- De Iuliis, M.; Miceli, E.; Castaldo, P. Information theory-guided machine learning to estimate seismic response of non-linear SDOF structures. Eng. Struct. 2025, 336, 120448.
- Zhao, C.; Zhu, Y.; Zhou, Z. Machine learning-based approaches for predicting the dynamic response of RC slabs under blast loads. Eng. Struct. 2022, 273, 115104.
- Yang, X.; Lei, Y.; Wang, J.; Zhu, H.; Shen, W. Physics-enhanced machine learning-based optimization of tuned mass damper parameters for seismically-excited buildings. Eng. Struct. 2023, 292, 116379.
- Pishro, A.A.; Zhang, S.; Huang, D.; Xiong, F.; Li, W.; Yang, Q. Application of Artificial Neural Networks and Multiple Linear Regression on Local Bond Stress Equation of UHPC and Reinforcing Steel Bars. Sci. Rep. 2021, 11, 15061.
- Pishro, A.A.; Yang, Q.; Zhang, S.; Pishro, M.A.; Zhang, Z.; Zhao, Y.; Postel, V.; Huang, D.; Li, W. Node, Place, Ridership, and Time model for Rail-Transit Stations: A Case Study. Sci. Rep. 2022, 12, 16120.
- Pishro, A.A.; L’Hostis, A.; Chen, D.; Pishro, M.A.; Zhang, Z.; Li, J.; Zhao, Y.; Zhang, L. The Integrated ANN-NPRT-HUB Algorithm for Rail-Transit Networks of Smart Cities: A TOD Case Study in Chengdu. Buildings 2023, 13, 1944.
- Pishro, A.A.; Zhang, S.; Hu, Q.; Zhang, Z.; Pishro, M.A.; Zhang, L.; L’Hostis, A.; Hejazi, F.; Liu, Y.; Zhao, Y. Advancing ultimate bond stress–slip model of UHPC structures through a novel hybrid machine learning approach. Structures 2024, 62, 106162.
- Pishro, A.A.; Zhang, Z.; Pishro, M.A.; Xiong, F.; Zhang, L.; Yang, Q.; Matlan, S.J. UHPC-PINN-Parallel Micro Element System for the Local Bond Stress–Slip model subjected to monotonic loading. Structures 2022, 46, 570–597.
- Pishro, A.A.; Zhang, Z.; Pishro, M.A.; Liu, W.; Zhang, L.; Yang, Q. Structural Performance of EB-FRP-Strengthened RC T-Beams Subjected to Combined Torsion and Shear Using ANN. Materials 2022, 15, 4852.
- Pishro, A.A.; Zhang, S.; Zhang, Z.; Zhao, Y.; Pishro, M.A.; Zhang, L.; Yang, Q.; Postel, V. Structural Behavior of FRP-Retrofitted RC Beams Under Combined Torsion and Bending. Materials 2022, 15, 3213.
- Pishro, A.A.; Zhang, S.; L’Hostis, A.; Liu, Y.; Hu, Q.; Hejazi, F.; Shahpasand, M.; Rahman, A.; Oueslati, A.; Zhang, Z. Machine learning-aided hybrid technique for dynamics of rail transit stations classification: A case study. Sci. Rep. 2024, 14, 23929.
- Pishro, A.A.; Feng, X.; Ping, Y.; Dengshi, H.; Shirazinejad, R.S. Comprehensive Equation of Local Bond Stress Between UHPC and Reinforcing Steel Bars. Constr. Build. Mater. 2020, 262, 119942.
- Ziaee, M.; Hejazi, F. Developed steady-state response for a new hybrid damper mounted on structures. PLoS ONE 2023, 18, e0290248.
- Hamzah, M.K.; Hejazi, F. Development of adjustable variable stiffness restrainer for bridge subjected to seismic excitation. PLoS ONE 2023, 18, e0286977.
- Asgarkhani, N.; Kazemi, F.; Jakubczyk-Gałczyńska, A.; Mohebi, B.; Jankowski, R. Seismic response and performance prediction of steel buckling-restrained braced frames using machine-learning methods. Eng. Appl. Artif. Intell. 2024, 128, 107388.
- Shahnazaryan, D.; O’Reilly, G.J. Next-generation non-linear and collapse prediction models for short- to long-period systems via machine learning methods. Eng. Struct. 2024, 306, 117801.
- Payán-Serrano, O.; Bojórquez, E.; Carrillo, J.; Bojórquez, J.; Leyva, H.; Rodríguez-Castellanos, A.; Carvajal, J.; Torres, J. Seismic Performance Prediction of RC, BRB and SDOF structures using deep learning and the intensity measure INp. AI 2024, 5, 1496–1516.
- Nguyen, H.D.; Dao, N.D.; Shin, M. Machine learning-based prediction for maximum displacement of seismic isolation systems. J. Build. Eng. 2022, 51, 104251.
- Abdellatif, B.; Benazouz, C.; Ahmed, M. Dynamic response estimation of an equivalent single degree of freedom system using artificial neural network and non-linear static procedure. Res. Eng. Struct. Mater. 2024, 10, 431–444.
- Hammal, S.; Bourahla, N.; Laouami, N. Neural-network based prediction of inelastic response spectra. Civ. Eng. J. 2020, 6, 1124–1135.
- Dwairi, H.M.; Tarawneh, A.N. Artificial neural networks prediction of inelastic displacement demands for structures built on soft soils. Innov. Infrastruct. Solut. 2022, 7, 4.
- Demir, A.; Sahin, E.K.; Demir, S. Advanced tree-based machine learning methods for predicting the seismic response of regular and irregular RC frames. Structures 2024, 64, 106524.
- Gentile, R.; Galasso, C. Surrogate probabilistic seismic demand modelling of inelastic single-degree-of-freedom systems for efficient earthquake risk applications. Earthq. Eng. Struct. Dyn. 2022, 51, 492–511.
- Kwon, O.S.; Elnashai, A. The effect of material and ground motion uncertainty on the seismic vulnerability curves of RC structure. Eng. Struct. 2006, 28, 289–303.
- Kazemi, F.; Asgarkhani, N.; Jankowski, R. Machine learning-based seismic fragility and seismic vulnerability assessment of reinforced concrete structures. Soil Dyn. Earthq. Eng. 2023, 166, 107761.
- EN 1998-1:2004; Eurocode 8: Design of Structures for Earthquake Resistance—Part 1: General Rules, Seismic Actions and Rules for Buildings. European Committee for Standardization: Brussels, Belgium, 2004.
- EN 1998-2:2005; Eurocode 8: Design of Structures for Earthquake Resistance—Part 2: Bridges. European Committee for Standardization: Brussels, Belgium, 2005.
- EN 1998-3:2005; Eurocode 8: Design of Structures for Earthquake Resistance—Part 3: Assessment and Retrofitting of Buildings. European Committee for Standardization: Brussels, Belgium, 2005.
- Pishro, A.A.; Zhang, S.; L’Hostis, A.; Hu, Q.; Liu, Y.; Zhang, Z.; Nguyen, V.D.; Fu, Y.; Li, T. Partial Differential Equations and Machine Learning Integration for Transit-Oriented Development. Appl. Soft Comput. 2025, 184, 113703.
- Rabi, M.; Abarkan, I.; Sarfarazi, S.; Ferreira, F.P.V.; Alkherret, A.J. Automated design and optimization of concrete beams reinforced with stainless steel. Struct. Concr. 2025, 1–24.
- Asgarkhani, N.; Kazemi, F.; Jankowski, R. Machine-learning based tool for seismic response assessment of steel structures including masonry infill walls and soil-foundation-structure interaction. Comput. Struct. 2025, 317, 107918.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).