Article

Reliability Evaluation and Optimization of System with Fractional-Order Damping and Negative Stiffness Device

1 School of Mathematics and Statistics, Xidian University, Xi’an 710071, China
2 Faculty of Mechanical Engineering, Department of Mechanics, University of Belgrade, 11000 Belgrade, Serbia
* Author to whom correspondence should be addressed.
Fractal Fract. 2025, 9(8), 504; https://doi.org/10.3390/fractalfract9080504
Submission received: 11 July 2025 / Revised: 26 July 2025 / Accepted: 29 July 2025 / Published: 31 July 2025

Abstract

Research on reliability control for enhancing dynamical systems under random loads holds significant importance in maintaining system stability, performance, and safety. The primary challenge lies in determining the reliability index while optimizing system parameters. To address this challenge effectively, we developed a novel intelligent algorithm and conducted an optimal reliability assessment for a Negative Stiffness Device (NSD) seismic isolation structure incorporating fractional-order damping. This algorithm combines the Gaussian Radial Basis Function Neural Network (GRBFNN) with the Particle Swarm Optimization (PSO) algorithm. It takes the reliability function with unknown parameters as the objective function, while using the Backward Kolmogorov (BK) equation, which governs the reliability function and is accompanied by boundary and initial conditions, as the constraint. During the operation of this algorithm, the neural network is employed to solve the BK equation, thereby deriving the fitness function in each iteration of the PSO algorithm. The PSO algorithm is then used to obtain the optimal parameters. The unique advantage of this algorithm is its ability to simultaneously achieve the optimization of implicit objectives and the solution of time-dependent BK equations. To evaluate the performance of the proposed algorithm, this study compared it with the algorithm combining GRBFNN with a Genetic Algorithm (GA-GRBFNN) across multiple dimensions, including performance and operational efficiency. The effectiveness of the proposed algorithm has been validated through numerical comparisons and Monte Carlo simulations. The control strategy presented in this paper provides a solid theoretical foundation for improving the reliability performance of mechanical engineering systems and demonstrates significant potential for practical applications.

1. Introduction

Reliability, as a core indicator for evaluating the operational safety performance of dynamical systems in complex vibration environments, plays a crucial role in ensuring the safety, functional stability, and durability of dynamical systems, civil engineering structures, mechanical equipment, and aerospace infrastructure [1]. In recent years, with regard to extreme vibration scenarios such as earthquakes, research on reliability based on negative stiffness damping mechanisms has become a hot topic in the field of structural dynamics [2]. Under random load excitation, the core objective of reliability assessment for dynamical systems is to quantify the probability that system responses remain within the safety domain, while reliability control aims to enhance reliability indicators by adjusting system parameters. Therefore, selecting appropriate control parameters is of great significance in achieving reliability objectives.
Within the analytical framework of stochastic dynamical systems, the probability density function is widely employed to characterize the statistical features of system uncertainties. Based on probabilistic distribution models, structural reliability assessment can be conducted using methods that include, but are not limited to, calculating the reliability function, the failure probability, or the mean first-passage time to analyze system failure mechanisms.
The concept of negative stiffness was first systematically proposed by Molyneaux in 1957 [3]. In his pioneering research, he designed a nonlinear vibration isolator that achieves the negative stiffness effect by constructing negative stiffness elements (NSE) from two symmetrically arranged inclined helical springs. Subsequently, Platus [4] further expanded the theoretical framework of negative stiffness mechanisms and developed a series of negative stiffness vibration isolation systems based on pre-compressed rods and preloaded flexible supports, covering vertical, horizontal, and six-degree-of-freedom vibration isolation systems. Carrella [5] and Mizuno [6] systematically demonstrated the remarkable effectiveness of this technology in vibration suppression through the experimental validation of negative stiffness vibration isolation systems. As research progressed, the principle of negative stiffness gradually permeated into the field of civil engineering. Ji [7] combined the negative stiffness mechanism with base isolation technology and proposed a novel composite isolation system that places a negative stiffness isolation layer in parallel with a traditional isolation system, conducting theoretical analysis and numerical simulation of its dynamic characteristics. Wu et al. [8] introduced negative stiffness magnetorheological dampers into structural seismic response control, established a theoretical analysis model based on response spectra, and verified their seismic performance through real-time hybrid testing. Currently, various typical negative stiffness device schemes have been developed, including NSDs using pre-loaded elastic elements [9], NSDs using magnets [10], and the Negative Stiffness Inter Damper [11]. Among these, the pre-compressed spring-type negative stiffness isolation system has demonstrated broad application prospects in engineering isolation due to its simple structure and reliable performance [12,13].
In recent years, the application of fractional derivative theory in the analysis of dynamical systems has garnered widespread attention. Compared with traditional integer-order models, the fractional derivative model has three advantages. Firstly, fractional derivatives can capture the memory effect and nonlocality of systems, effectively describing the correlation between the current state of a system and its historical inputs. This characteristic holds significant advantages in fields requiring the consideration of time-accumulation effects, such as the mechanics of viscoelastic materials [14]. Secondly, this theory provides a new mathematical tool for modeling the damping characteristics of complex systems, particularly in simulating the nonlinear viscous behavior of materials and non-continuous connection methods of structures, thereby breaking through the theoretical limitations of integer-order models [15]. Finally, the fractional derivative framework demonstrates stronger adaptability to nonlinear dynamical systems, enabling it to handle dynamic characteristics that traditional models cannot describe. It exhibits excellent robustness in engineering fields such as active vibration control and broadband noise suppression [16]. These characteristics make fractional derivative theory an important mathematical tool for analyzing the dynamic behaviors of complex dynamical systems, providing new theoretical support for the precise modeling and high-performance control of multi-physics coupled systems.
Reliability analysis, as a core approach for the safety assessment of engineering structures, has developed various mature theoretical frameworks. Mainstream analysis methods include the reliability index method [17], performance measurement approach [18], and sequential optimization and reliability assessment method [19]. In the field of earthquake engineering, due to the significant stochastic process characteristics of seismic excitation, reliability assessment methods based on first-passage have emerged as effective tools for quantifying the safety performance of control systems. By setting predefined safety threshold boundaries, this method can accurately characterize the dynamic reliability features of structures under random vibrations [20]. Recent research advancements have further deepened the application in this area: Taflanidis et al. [21] developed an efficient reliability design framework for tuned column damper systems, significantly enhancing control effectiveness under seismic action; Marano et al. [22] introduced constrained reliability theory into the optimization design of tuned mass dampers, achieving coordinated optimization of structural vibration control and reliability indicators; Mishra et al. [23] combined the first-passage model with base-isolated structures to establish a reliability-based structural optimization problem.
One of the most challenging core issues in reliability assessment under random loading conditions lies in solving the reliability control equations for nonlinear stochastic dynamical systems. To address this, it is necessary to construct system reliability models using probability theory and stochastic process theory [24], and derive the corresponding Backward Kolmogorov (BK) equations based on nonlinear stochastic dynamics and stochastic averaging principles. The BK equation is a parabolic time-varying partial differential equation (PDE) with specific initial and boundary conditions. Due to the existence of both first and second-order derivative terms in the equation, it is difficult to obtain exact analytical solutions. Currently, various numerical methods have been developed for solving the BK equation, including the finite difference method (FDM) [25,26], cell mapping method (CM) [27,28], and path integral method (PIM) [29,30], among others. However, these traditional methods all have inherent limitations. Therefore, there is a need to develop more efficient and robust numerical methods to achieve accurate solutions of the BK equation at arbitrary transient time steps, thereby breaking through the performance bottlenecks of existing technologies and providing more reliable theoretical support for the reliability analysis and control of nonlinear stochastic dynamical systems.
In recent years, with breakthrough advances in artificial intelligence technology, numerical methods based on three-layer Gaussian Radial Basis Function Neural Networks (GRBFNN) have demonstrated significant advantages in solving the equations of complex stochastic dynamical systems. In 2023, Wang [31] applied GRBFNN to solve the Backward Kolmogorov (BK) equation arising in reliability analysis, exploring the probabilistic distribution characteristics of reliability functions for both linear and nonlinear dynamical systems and validating the feasibility of this method for time-varying partial differential equations. In the same year, Li et al. [32] further expanded the application boundaries of GRBFNN by simultaneously obtaining both the time-varying reliability function and the mean first-passage time from the BK equations of a class of stochastic dynamical systems. Chen [33] applied this method to study the dynamic behavior evolution of vibro-impact systems; by analyzing qualitative changes in probability distributions, he observed patterns of transient response variation and stochastic P-bifurcation phenomena. These works indicate that GRBFNN is an efficient algorithm with high accuracy and short runtime. However, they investigate the reliability equations of systems under random loading without introducing any controllers or optimization strategies to enhance reliability performance, focusing instead on theoretical exploration [34,35].
In reliability assessments under random loading conditions, another key challenge lies in maximizing reliability probability through specific reliability optimization methods under the BK equation and other constraints. The existing mainstream approaches primarily include the variational method [36] and the maximum principle [37,38]. The core advantage of the variational method is its ability to transform control problems into functional extremum problems, allowing for analytical solutions under ideal conditions. Meanwhile, the maximum principle converts optimal control problems into functional extremum conditions by constructing an augmented Hamiltonian system, providing the necessary conditions for stochastic optimal control. It is particularly noteworthy that traditional reliability optimization methods typically impose strict assumptions on the objective or value functions and heavily rely on solving the Euler–Lagrange equations or the Hamilton–Jacobi–Bellman (HJB) equations, facing significant computational challenges when dealing with high-dimensional or strongly nonlinear systems. Furthermore, the resulting control strategies often suffer from issues such as insufficient interpretability and limited engineering applicability. Therefore, there is a need to develop a novel optimization algorithm that minimizes theoretical complexity and numerical implementation difficulty while possessing the capability to perform stochastic reliability control for strongly nonlinear systems.
With advancements in machine learning and computer technology, the integration of traditional optimization algorithms with neural networks has demonstrated significant practicality in the field of reliability optimization. Cheng et al. [39,40] proposed an artificial neural network (ANN)-based genetic algorithm (GA) framework for the reliability assessment of engineering structural systems: this method generates training datasets using the uniform design method, employs an ANN to fit an explicit objective function, and combines it with an improved GA to achieve efficient estimation of failure probabilities. Gomes et al. [41] introduced a hybrid approach integrating GA with two types of neural networks (including a multilayer perceptron, MLP) for the reliability analysis of laminated composite structures: the MLP constructs an explicit objective function for the GA, the GA minimizes the total thickness of the laminated plates, and the final reliability analysis is performed using the First-Order Second-Moment (FOSM) method. Nouri et al. [42] combined ANN with the Particle Swarm Optimization (PSO) algorithm to develop a power extraction strategy for photovoltaic systems that maintains stable operation under varying light conditions, effectively optimizing the battery charging and discharging processes while significantly reducing power losses during DC-AC conversion. Zuriani et al. [43] introduced a PSO-NN model for predicting the remaining useful life (RUL) of batteries; the model consistently achieved the lowest mean absolute error (MAE = 2.7708) and root mean square error (RMSE = 4.3468) in tests, significantly outperforming comparison models such as CA-NN, HSA-NN, and ARIMA. Dong et al. [44] proposed a model for the classification and integration of innovation and entrepreneurship education resources based on Graph Neural Networks (GNN) and PSO.
By leveraging GNN to uncover inherent relationships among educational resources and combining it with PSO for optimization of the classification process, the model achieves a remarkable classification accuracy of 92.5% and reduces processing time by 40%, effectively enhancing the management efficiency of educational resources.
Unlike existing research, the authors have previously proposed a GA-GRBFNN algorithm [45]. By embedding the GRBFNN within the framework of the GA, this approach effectively addresses the challenge of the implicit objective function during optimization and achieves theoretical reliability estimation based on stochastic dynamics theory and stochastic averaging principles. However, there remains room for improvement in terms of computational efficiency and solution accuracy for this method. Thus, this paper further introduces the PSO-GRBFNN method, which systematically optimizes the preceding approach from three dimensions: solution complexity, computational accuracy, and operational efficiency, while retaining the capability of GA-GRBFNN to handle implicit objective functions. This fills a methodological gap in the field of intelligent reliability optimization under theoretically constrained scenarios. The core contribution of this paper lies in the innovative construction of a PSO-GRBFNN collaborative algorithm framework, whose technical advantages are manifested in the following three aspects:
1. Solving control equations;
2. Optimizing controller parameters;
3. Conducting reliability assessment and optimization under stochastic dynamic conditions.
This synergistic coupling eliminates the need for sequential optimization loops, thereby significantly enhancing the computational efficiency of reliability-based control design.
The chapter arrangement of this paper is as follows: Section 2 establishes the system dynamics model and completes mathematical modeling and equation derivation; Section 3 shows the performance metric system for reliability assessment and constructs the corresponding BK equation; Section 4 formulates the optimization problem model under reliability constraints, proposes the PSO-GRBFNN algorithm framework, and elaborates on its implementation process; Section 5 verifies the algorithm’s performance through numerical simulations, including solving for optimal parameters, analyzing the probability distributions of reliability functions and first-passage times, and conducting a comparative study between PSO-GRBFNN and GA-GRBFNN as well as a sensitivity analysis of key parameters; Section 6 is the conclusion.

2. Mathematical Model

In conventional seismic-isolated structures, the NSD system plays a vital role in enhancing vibration isolation performance. Fractional-order damping has demonstrated widespread applicability in rheology, viscoelasticity, and automatic control engineering. Physically, it serves as an intermediary modeling paradigm between purely elastic and purely viscous integer-order descriptions, enabling the characterization of memory-dependent and hereditary phenomena. Motivated by this theoretical advantage, the present study systematically integrates fractional damping operators into the framework of the NSD system to achieve more precise dynamic control [15]. Thus, we obtain a novel NSD seismic isolation structure with fractional-order damping characteristics, as illustrated in Figure 1.
To facilitate a comprehensive understanding of the NSD mechanism, we present a concise introduction to its architecture in Figure 2, which shows the motion state diagram and the force analysis diagram of the NSD.
In Figure 2, the deformation displacement Δ of the spring is given by the following equation:
\[
\Delta = \sqrt{L_0^2 + x^2(t)} - L, \tag{1}
\]
where L and L_0 denote the original length and the compressed length of the spring, respectively. The spring force F_{ng} can be expressed as follows:
\[
F_{ng} = k_v x(t)\,\frac{\Delta}{\sqrt{L_0^2 + x^2(t)}} = k_v x(t)\left[1 - L\left(L_0^2 + x^2(t)\right)^{-\frac{1}{2}}\right]. \tag{2}
\]
To simplify subsequent calculations, the spring force F_{ng} is expanded in a Taylor series about x = 0:
\[
\tilde{F}_{ng} = k_v\left(1 - \frac{L}{L_0}\right)x(t) + \frac{k_v L}{2L_0^3}x^3(t) - \frac{3k_v L}{8L_0^5}x^5(t) + \frac{5k_v L}{16L_0^7}x^7(t) + O\!\left(x^8(t)\right). \tag{3}
\]
Meanwhile, the absolute error between the Taylor-expanded force F̃_{ng} and the original spring force F_{ng} is calculated as err = |F_{ng} − F̃_{ng}|. Consequently, we obtain the functional plots of Equations (2) and (3) under various Taylor expansion orders, along with their corresponding error distributions, shown in Figure 3.
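The coefficients in Equation (3) can be recovered symbolically; the following sketch (our own illustration, not part of the paper, assuming SymPy is available) series-expands Equation (2):

```python
import sympy as sp

x, L, L0, kv = sp.symbols('x L L_0 k_v', positive=True)

# Spring force, Equation (2): F_ng = k_v x [1 - L (L_0^2 + x^2)^(-1/2)]
F_ng = kv * x * (1 - L / sp.sqrt(L0**2 + x**2))

# Taylor expansion about x = 0 up to (and excluding) x^8, cf. Equation (3)
series = sp.expand(sp.series(F_ng, x, 0, 8).removeO())

expected = (kv * (1 - L / L0) * x + kv * L / (2 * L0**3) * x**3
            - 3 * kv * L / (8 * L0**5) * x**5 + 5 * kv * L / (16 * L0**7) * x**7)

assert sp.simplify(series - expected) == 0  # coefficients match Equation (3)
```

The same expansion with more terms reproduces the accuracy trend discussed below for Figure 3.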
As illustrated in Figure 3, as the number of terms in the Taylor series expansion increases, its approximation to the original function improves and the error gradually decreases. Furthermore, to investigate the effects of the spring stiffness, the compressed length L_0, and the original length L on the NSD, we differentiate the original expression, Equation (2), once with respect to x, obtaining the tangent stiffness:
\[
K_{nsd} = k_v\left[1 - L\left(L_0^2 + x^2(t)\right)^{-\frac{1}{2}}\right] + k_v x^2(t)\,L\left(L_0^2 + x^2(t)\right)^{-\frac{3}{2}}. \tag{4}
\]
The resulting stiffness curves are shown in Figure 4.
It can be observed from Figure 4 that there is a correlation between the spring stiffness and the negative stiffness provided by NSD: as the spring stiffness gradually increases, the magnitude of the negative stiffness provided by NSD continuously decreases. Meanwhile, the initial spring length also has an impact on the negative stiffness. With the increase in L 0 , the range of action of the negative stiffness shows a tendency to expand, although the magnitude of the negative stiffness itself also keeps rising. However, from another perspective, when the length of L increases, the negative stiffness will decrease accordingly. Considering all these factors, the initial spring length is a crucial parameter that must be carefully considered in the design and application of negative stiffness devices.
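To make the trends in Figure 4 concrete, the tangent stiffness of Equation (4) can be evaluated numerically. The sketch below is our own illustration; the parameter values are arbitrary and not taken from the paper:

```python
import math

def K_nsd(x, kv, L, L0):
    """Tangent stiffness of the NSD, Equation (4)."""
    r = math.sqrt(L0**2 + x**2)
    return kv * (1.0 - L / r) + kv * x**2 * L / r**3

# With the spring pre-compressed (L > L0), the stiffness at x = 0 equals
# k_v (1 - L/L0) < 0: the device provides negative stiffness around the origin.
kv, L, L0 = 1.0e4, 0.5, 0.3
print(K_nsd(0.0, kv, L, L0))   # negative, equal to k_v (1 - L/L0)
print(K_nsd(0.3, kv, L, L0))   # less negative away from the origin
```

The closed-form value K_nsd(0) = k_v(1 − L/L_0) shows directly why pre-compression (L > L_0) is required for the negative-stiffness effect.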
Based on the brief introduction of the NSD system studied in this paper and integrating the analytical frameworks from References [7,32] with Figure 1, we are able to derive the equations of motion that the system follows, as detailed below:
\[
m_1\ddot{x}(t) + (c_1 + c_{ng})\dot{x}(t) + D^{\alpha}x(t) + k_1 x(t) + \tilde{F}_{ng} = m_1\ddot{x}_g(t), \tag{5}
\]
in which m_1 represents the mass of the isolation structure; c_1 and k_1 are the total damping coefficient and total stiffness coefficient of the isolation bearings, respectively; c_{ng} and k_v correspond to the damping coefficient and negative stiffness coefficient of the negative stiffness device; ẍ_g denotes the random excitation; x is the displacement of the isolation structure relative to the ground; and D^α x represents the fractional-order damping. For convenience, the main notation is summarized in Appendix A. Next, we conduct dimensionless processing on the aforementioned parameters and variables:
\[
\omega_1 = \sqrt{\frac{k_1}{m_1}}, \quad \xi_1 = \frac{c_1}{2m_1\omega_1}, \quad \xi_{ng} = \frac{c_{ng}}{2m_1\omega_1}, \quad \alpha_{ng} = \frac{k_v}{k_1}, \quad \gamma_{ng} = \frac{L_0}{L}, \quad \theta_{ng} = \frac{\xi_{ng}}{\xi_1}. \tag{6}
\]
Based on the aforementioned processing, the dimensionless control equation can ultimately be derived, with its specific form presented as follows:
\[
\ddot{x}(t) + 2\left(1 + \theta_{ng}\right)\xi_1\omega_1\dot{x}(t) + \frac{D^{\alpha}x(t)}{m_1} + \omega_1^2\left[1 + \alpha_{ng}\left(1 - \frac{1}{\gamma_{ng}}\right)\right]x(t) + \frac{\omega_1^2\alpha_{ng}}{2\gamma_{ng}L_0^2}x^3(t) - \frac{3\omega_1^2\alpha_{ng}}{8\gamma_{ng}L_0^4}x^5(t) + \frac{5\omega_1^2\alpha_{ng}}{16\gamma_{ng}L_0^6}x^7(t) = \ddot{x}_g(t). \tag{7}
\]
For convenience in subsequent derivation, Equation (7) is now simplified and recorded as follows:
\[
\ddot{x}(t) + h\left(x(t), \dot{x}(t)\right) + \frac{D^{\alpha}x(t)}{m_1} + g(x) = \ddot{x}_g(t), \tag{8}
\]
that is,
\[
h\left(x(t), \dot{x}(t)\right) = 2\left(1 + \theta_{ng}\right)\xi_1\omega_1\dot{x}(t), \tag{9}
\]
\[
g(x) = \omega_1^2\left[1 + \alpha_{ng}\left(1 - \frac{1}{\gamma_{ng}}\right)\right]x(t) + \frac{\omega_1^2\alpha_{ng}}{2\gamma_{ng}L_0^2}x^3(t) - \frac{3\omega_1^2\alpha_{ng}}{8\gamma_{ng}L_0^4}x^5(t) + \frac{5\omega_1^2\alpha_{ng}}{16\gamma_{ng}L_0^6}x^7(t). \tag{10}
\]
And the definition of the fractional-order derivative is given as follows:
\[
D^{\alpha}x(t) =
\begin{cases}
\dfrac{1}{\Gamma(n-\alpha)}\displaystyle\int_0^t \tau^{\,n-\alpha-1}x^{(n)}(t-\tau)\,d\tau, & n-1 < \alpha < n \in \mathbb{N}, \\[2mm]
x^{(n)}(t), & \alpha = n \in \mathbb{N}.
\end{cases} \tag{11}
\]
Here, Γ(x) denotes the gamma function, defined as Γ(x) = \int_0^{\infty} t^{x-1}e^{-t}\,dt. Moreover, it can be clearly observed from Equation (7) that the system is a stochastic system with a strongly nonlinear restoring force and fractional-order damping characteristics. According to the theory of quasi-Hamiltonian systems [46], a generalized transformation can be utilized to convert the fast variable x(t) of the system into the slowly varying variables a(t) and γ(t). The transformation equations are as follows:
\[
x(t) = a(t)\cos\theta(t), \quad \dot{x}(t) = -a(t)\,v(a,\theta)\sin\theta(t), \quad \theta(t) = \Phi(t) + \gamma(t), \tag{12}
\]
\[
v(a,\theta) = \frac{d\Phi}{dt} = \sqrt{\frac{2\left[U(a) - U(a\cos\theta)\right]}{a^2\sin^2\theta}} = b_0(a) + \sum_{i=1}^{\infty} b_i(a)\cos i\theta. \tag{13}
\]
in which U(x) = \int g(x)\,dx. By averaging Equation (13) over θ from 0 to 2π, the average frequency of the system can be obtained:
\[
\bar{\omega}(a) = b_0(a) \triangleq \bar{\omega}_a. \tag{14}
\]
Thus, we obtain the approximate expression θ(t) ≈ ω̄_a t + γ(t). Based on the harmonic transformation (12), we can derive the following:
\[
\dot{a}\cos\theta - a\dot{\gamma}\sin\theta = 0. \tag{15}
\]
In order to obtain the expression for ẍ(t), we differentiate ẋ(t) = −a(t)v(a,θ)\sinθ(t) with respect to t, which gives the following:
\[
\ddot{x}(t) = -\dot{a}\,v(a,\theta)\sin\theta(t) - a\frac{\partial v(a,\theta)}{\partial a}\dot{a}\sin\theta(t) - a\frac{\partial v(a,\theta)}{\partial\theta}\left(v(a,\theta) + \dot{\gamma}\right)\sin\theta(t) - a(t)\,v(a,\theta)\cos\theta(t)\left(v(a,\theta) + \dot{\gamma}\right). \tag{16}
\]
Obviously, to evaluate the above equation, we must first calculate the two partial derivatives ∂v(a,θ)/∂a and ∂v(a,θ)/∂θ:
\[
\frac{\partial v(a,\theta)}{\partial a} = \frac{g(a) - g(a\cos\theta)\cos\theta}{a^2\sin^2\theta\,v(a,\theta)} - \frac{v(a,\theta)}{a}, \tag{17}
\]
\[
\frac{\partial v(a,\theta)}{\partial\theta} = \frac{g(a\cos\theta)}{a\sin\theta\,v(a,\theta)} - \frac{v(a,\theta)\cos\theta}{\sin\theta}. \tag{18}
\]
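Identities (17) and (18) follow from differentiating Equation (13). They can be spot-checked with SymPy; the sketch below is our own illustration and uses a truncated potential U(x) = Kx²/2 + K₂x⁴/4 purely for convenience:

```python
import sympy as sp

a, th, K, K2 = sp.symbols('a theta K K_2', positive=True)

U = lambda z: K * z**2 / 2 + K2 * z**4 / 4   # illustrative potential
g = lambda z: K * z + K2 * z**3              # g(x) = dU/dx

# v(a, theta) from Equation (13)
v = sp.sqrt(2 * (U(a) - U(a * sp.cos(th))) / (a**2 * sp.sin(th)**2))

# Right-hand sides of Equations (17) and (18)
dv_da = (g(a) - g(a * sp.cos(th)) * sp.cos(th)) / (a**2 * sp.sin(th)**2 * v) - v / a
dv_dth = g(a * sp.cos(th)) / (a * sp.sin(th) * v) - v * sp.cos(th) / sp.sin(th)

# Compare against direct differentiation at a random interior point
pt = {a: 0.8, th: 0.9, K: 1.3, K2: 0.5}
err1 = float((sp.diff(v, a) - dv_da).evalf(subs=pt))
err2 = float((sp.diff(v, th) - dv_dth).evalf(subs=pt))
print(err1, err2)   # both ≈ 0
```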
By substituting Equations (17) and (18) into Equation (16) and rearranging, we can obtain the following:
\[
\ddot{x}(t) = -\dot{a}\,\frac{g(a) - g(a\cos\theta)\cos\theta}{a\sin\theta\,v(a,\theta)} - g(a\cos\theta) - \dot{\gamma}\,\frac{g(a\cos\theta)}{v(a,\theta)}. \tag{19}
\]
Then, substituting Equations (12), (15), and (19) into Equation (7), we can derive that
\[
\dot{a} = \frac{a\sin\theta\,v(a,\theta)}{g(a)}h\left(x(t),\dot{x}(t)\right) + \frac{a\sin\theta\,v(a,\theta)}{g(a)}\frac{D^{\alpha}x(t)}{m_1} - \frac{a\sin\theta\,v(a,\theta)}{g(a)}\ddot{x}_g(t), \tag{20}
\]
\[
\dot{\gamma} = \frac{\cos\theta\,v(a,\theta)}{g(a)}h\left(x(t),\dot{x}(t)\right) + \frac{\cos\theta\,v(a,\theta)}{g(a)}\frac{D^{\alpha}x(t)}{m_1} - \frac{\cos\theta\,v(a,\theta)}{g(a)}\ddot{x}_g(t). \tag{21}
\]
For the convenience of subsequent derivation and representation, the following notational simplifications are made:
\[
\begin{aligned}
F_1 &= \frac{a\sin\theta\,v(a,\theta)}{g(a)}h\left(x(t),\dot{x}(t)\right) + \frac{a\sin\theta\,v(a,\theta)}{g(a)}\frac{D^{\alpha}x(t)}{m_1}, \qquad G_1 = \frac{a\sin\theta\,v(a,\theta)}{g(a)}, \\
F_2 &= \frac{\cos\theta\,v(a,\theta)}{g(a)}h\left(x(t),\dot{x}(t)\right) + \frac{\cos\theta\,v(a,\theta)}{g(a)}\frac{D^{\alpha}x(t)}{m_1}, \qquad G_2 = \frac{\cos\theta\,v(a,\theta)}{g(a)}.
\end{aligned} \tag{22}
\]
At this point, based on the stochastic averaging procedure [47], we can derive the following Itô stochastic differential equation:
\[
da = m(a)\,dt + \sigma(a)\,dB(t), \tag{23}
\]
\[
m(a) = \left\langle F_1 + D\frac{\partial G_1}{\partial a}G_1 + D\frac{\partial G_1}{\partial\gamma}G_2 \right\rangle_{\theta}, \tag{24}
\]
\[
\sigma^2(a) = 2D\left\langle G_1^2 \right\rangle_{\theta} = \frac{D\,a^2 v^2(a,\theta)}{g(a)^2}. \tag{25}
\]
Since a(t) and γ(t) are slowly varying processes, θ(t−τ) can be approximated by the first-order Taylor expansion:
\[
\theta(t-\tau) = \theta(t) - \bar{\omega}(a)\tau. \tag{26}
\]
Based on this, we proceed to calculate the time-averaged expressions for the drift coefficient and the diffusion coefficient, with the specific calculation process as follows. First, we perform time-averaging calculations on the fractional derivative part:
\[
\begin{aligned}
\left\langle \frac{a\sin\theta\,v(a,\theta)}{g(a)}\,\frac{D^{\alpha}x(t)}{m_1} \right\rangle_{\theta}
&= \frac{1}{g(a)m_1}\lim_{T\to\infty}\frac{1}{T}\int_0^T D^{\alpha}\!\left(a\cos\theta\right)a\sin\theta\,v(a,\theta)\,dt \\
&= \frac{1}{g(a)m_1}\lim_{T\to\infty}\frac{1}{T\,\Gamma(1-\alpha)}\int_0^T \frac{d}{dt}\!\left(\int_0^t \frac{x(t-\tau)}{\tau^{\alpha}}\,d\tau\right)a\sin\theta\,v(a,\theta)\,dt \\
&= \frac{1}{\Gamma(1-\alpha)\,g(a)m_1}\lim_{T\to\infty}\frac{1}{T}\int_0^T a\sin\theta\,v(a,\theta)\,d\!\left(\int_0^t \frac{x(t-\tau)}{\tau^{\alpha}}\,d\tau\right) \\
&= \frac{1}{\Gamma(1-\alpha)\,g(a)m_1}\lim_{T\to\infty}\left\{ \frac{1}{T}\left. a\sin\theta\,v(a,\theta)\int_0^t \frac{x(t-\tau)}{\tau^{\alpha}}\,d\tau \right|_0^T - \frac{1}{T}\int_0^T \left(\int_0^t \frac{x(t-\tau)}{\tau^{\alpha}}\,d\tau\right)\frac{d\big(a\sin\theta\,v(a,\theta)\big)}{dt}\,dt \right\}.
\end{aligned} \tag{27}
\]
From the existing literature [48], we have the following asymptotic expressions:
\[
\int_0^t \frac{\sin(\bar{\omega}_a s)}{s^{\alpha}}\,ds = \bar{\omega}_a^{\alpha-1}\left[\Gamma(1-\alpha)\cos\frac{\pi\alpha}{2} - \frac{\cos(\bar{\omega}_a t)}{(\bar{\omega}_a t)^{\alpha}} + o\!\left((\bar{\omega}_a t)^{-\alpha}\right)\right], \tag{28}
\]
\[
\int_0^t \frac{\cos(\bar{\omega}_a s)}{s^{\alpha}}\,ds = \bar{\omega}_a^{\alpha-1}\left[\Gamma(1-\alpha)\sin\frac{\pi\alpha}{2} + \frac{\sin(\bar{\omega}_a t)}{(\bar{\omega}_a t)^{\alpha}} + o\!\left((\bar{\omega}_a t)^{-\alpha}\right)\right]. \tag{29}
\]
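These asymptotic formulas are easy to check numerically. The sketch below (our own, using only NumPy) truncates the sine integral at a large upper limit and compares it with the Γ-function constant for α = 0.5 and ω̄_a = 1:

```python
import numpy as np
from math import gamma, pi, cos

alpha, omega = 0.5, 1.0
T = 400.0   # truncation; the remainder in Eq. (28) decays like (omega*T)**(-alpha)

# Trapezoidal evaluation of the integral of sin(omega*s) * s**(-alpha) over [0, T]
s = np.linspace(1e-12, T, 2_000_001)
f = np.sin(omega * s) * s**(-alpha)
h = s[1] - s[0]
integral = (f.sum() - 0.5 * (f[0] + f[-1])) * h

limit = omega**(alpha - 1.0) * gamma(1.0 - alpha) * cos(pi * alpha / 2.0)
print(integral, limit)   # differ by O((omega*T)**(-alpha)), i.e. a few percent here
```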
By using Equations (28) and (29), we can derive that
\[
\begin{aligned}
\lim_{T\to\infty}\frac{1}{T}\left. a\sin\theta\,v(a,\theta)\int_0^t \frac{x(t-\tau)}{\tau^{\alpha}}\,d\tau \right|_0^T
&\approx \lim_{T\to\infty}\frac{1}{T}\left( a^2 v(a,\theta)\sin\theta\cos\theta\int_0^T \frac{\cos(\bar{\omega}_a\tau)}{\tau^{\alpha}}\,d\tau + a^2 v(a,\theta)\sin^2\theta\int_0^T \frac{\sin(\bar{\omega}_a\tau)}{\tau^{\alpha}}\,d\tau \right) \\
&\approx \lim_{T\to\infty}\frac{a^2 v(a,\theta)\sin\theta\,\bar{\omega}_a^{\alpha-1}}{T}\left( \Gamma(1-\alpha)\sin\!\left(\theta + \frac{\alpha\pi}{2}\right) + \frac{\sin(\bar{\omega}_a T - \theta)}{(\bar{\omega}_a T)^{\alpha}} \right) = 0.
\end{aligned} \tag{30}
\]
By substituting the conclusion derived from Equation (30) into Equation (27), we can further deduce:
\[
\begin{aligned}
\left\langle \frac{a\sin\theta\,v(a,\theta)}{g(a)}\,\frac{D^{\alpha}x(t)}{m_1} \right\rangle_{\theta}
&= -\frac{1}{g(a)m_1\Gamma(1-\alpha)}\lim_{T\to\infty}\frac{1}{T}\int_0^T a\,g(a\cos\theta)\left[\cos\theta\int_0^t \frac{\cos(\bar{\omega}_a\tau)}{\tau^{\alpha}}\,d\tau + \sin\theta\int_0^t \frac{\sin(\bar{\omega}_a\tau)}{\tau^{\alpha}}\,d\tau\right]dt \\
&\approx -\frac{a\,\bar{\omega}_a^{\alpha-1}}{2\pi\,g(a)m_1}\int_0^{2\pi} g(a\cos\theta)\left[\cos\theta\sin\frac{\alpha\pi}{2} + \sin\theta\cos\frac{\alpha\pi}{2}\right]d\theta, \qquad (0 < \alpha < 1).
\end{aligned} \tag{31}
\]
In Equation (13), U ( x ) represents the system’s energy, and its specific expression is as follows:
\[
\begin{aligned}
U(x) &= \int \left\{ \omega_1^2\left[1 + \alpha_{ng}\left(1 - \frac{1}{\gamma_{ng}}\right)\right]x(t) + \frac{\omega_1^2\alpha_{ng}}{2\gamma_{ng}L_0^2}x^3(t) - \frac{3\omega_1^2\alpha_{ng}}{8\gamma_{ng}L_0^4}x^5(t) + \frac{5\omega_1^2\alpha_{ng}}{16\gamma_{ng}L_0^6}x^7(t) \right\}dx \\
&= \frac{1}{2}\omega_1^2\left[1 + \alpha_{ng}\left(1 - \frac{1}{\gamma_{ng}}\right)\right]x^2(t) + \frac{\omega_1^2\alpha_{ng}}{8\gamma_{ng}L_0^2}x^4(t) - \frac{3\omega_1^2\alpha_{ng}}{48\gamma_{ng}L_0^4}x^6(t) + \frac{5\omega_1^2\alpha_{ng}}{128\gamma_{ng}L_0^6}x^8(t) \\
&\triangleq \frac{1}{2}K x^2(t) + \frac{1}{4}K_2 x^4(t) - \frac{1}{6}K_3 x^6(t) + \frac{1}{8}K_4 x^8(t),
\end{aligned} \tag{32}
\]
in which K = \omega_1^2\left[1 + \alpha_{ng}\left(1 - \frac{1}{\gamma_{ng}}\right)\right], K_2 = \frac{\omega_1^2\alpha_{ng}}{2\gamma_{ng}L_0^2}, K_3 = \frac{3\omega_1^2\alpha_{ng}}{8\gamma_{ng}L_0^4}, K_4 = \frac{5\omega_1^2\alpha_{ng}}{16\gamma_{ng}L_0^6}. Thus, by using Equation (13), we can calculate the expression of v(a, θ):
\[
\begin{aligned}
v(a,\theta) &= \sqrt{\frac{2\left[\frac{1}{2}K\left(a^2 - a^2\cos^2\theta\right) + \frac{1}{4}K_2\left(a^4 - a^4\cos^4\theta\right) - \frac{1}{6}K_3\left(a^6 - a^6\cos^6\theta\right) + \frac{1}{8}K_4\left(a^8 - a^8\cos^8\theta\right)\right]}{a^2\sin^2\theta}} \\
&= \sqrt{K + \frac{K_2 a^2\left(1 + \cos^2\theta\right)}{2} - \frac{K_3 a^4\left(1 + \cos^2\theta + \cos^4\theta\right)}{3} + \frac{K_4 a^6\left(1 + \cos^2\theta\right)\left(1 + \cos^4\theta\right)}{4}} \\
&= \left[\eta\left(1 + \eta_1\cos 2\theta + \eta_2\cos 4\theta + \eta_3\cos 6\theta\right)\right]^{\frac{1}{2}} \approx b_0(a) + b_2(a)\cos 2\theta + b_4(a)\cos 4\theta + b_6(a)\cos 6\theta.
\end{aligned} \tag{33}
\]
The full derivation is shown in Appendix B.1. Thus, we obtain the average frequency ω̄_a = b_0(a); hereafter, v(a, θ) is abbreviated as v. Based on this, we can carry out the subsequent calculations of the drift and diffusion functions and obtain the corresponding results:
\[
\begin{aligned}
m(a) &= \left\langle F_1 + D\frac{\partial G_1}{\partial a}G_1 + D\frac{\partial G_1}{\partial\gamma}G_2 \right\rangle_{\theta} \\
&= -\frac{(av)^2}{g(a)}\left(1 + \theta_{ng}\right)\xi_1\omega_1 - \frac{a\left(\frac{1}{2}K a + \frac{3}{8}K_2 a^3 - \frac{5}{16}K_3 a^5 + \frac{35}{128}K_4 a^7\right)}{g(a)m_1}\,\bar{\omega}_a^{\alpha-1}\sin\frac{\alpha\pi}{2} + D\left[\frac{a^2 v\,\frac{\partial v}{\partial a}}{2g(a)^2} + \frac{a v^2}{2g(a)^2} - \frac{a^2 v^2 g'(a)}{2g(a)^3}\right],
\end{aligned} \tag{34}
\]
\[
\sigma^2(a) = 2D\left\langle G_1^2 \right\rangle_{\theta} = \frac{D\,a^2 v^2}{g(a)^2}. \tag{35}
\]
The related expressions are given in Appendix B.2.

3. Reliability Index and Associated Equations

Suppose the safety range of the system’s amplitude is defined by the closed interval Ω = [0, a] ⊂ ℝ. Under this assumption, the time-varying reliability function R(t | a_0, L_0) is defined as the probability that, given an initial amplitude a_0 ∈ Ω, the system’s amplitude remains within the safety domain Ω throughout the time interval (0, t], t ∈ (0, T]:
\[
R(t \mid a_0, L_0) = P\left\{ a(s) \in \Omega,\ s \in (0, t] \mid a(0) = a_0 \in \Omega \right\}. \tag{36}
\]
To derive the differential equation satisfied by R(t | a_0, L_0), we introduce the conditional transition probability density q(a, t | a_0, L_0), which describes the transition probability density of those sample functions that remain within the safety domain Ω over the interval (0, T]. It satisfies the Backward Kolmogorov (BK) equation [49]:
\[
\frac{\partial q(a, t \mid a_0, L_0)}{\partial t} = L_{BK}\left[q(a, t \mid a_0, L_0)\right], \tag{37}
\]
where L_{BK} is an elliptic operator with the following expression:
\[
L_{BK} = m(a_0)\frac{\partial}{\partial a_0} + \frac{1}{2}\sigma^2(a_0)\frac{\partial^2}{\partial a_0^2}. \tag{38}
\]
The expressions for m(a_0) and σ²(a_0) were given above. The conditional reliability function is the proportion of samples that remain within the safety domain Ω throughout the interval [0, T] relative to the total number of samples. Therefore, we can obtain the following:
\[
R(t \mid a_0, L_0) = \int_{\Omega} q(a, t \mid a_0, L_0)\,da. \tag{39}
\]
Integrating both sides of Equation (37) with respect to a over the safety domain Ω shows that the time-varying reliability function R(t | a_0, L_0) itself satisfies the BK equation, which is a parabolic backward partial differential equation:
$$\frac{\partial R(t \mid a_0, L_0)}{\partial t} = m(a_0)\frac{\partial R(t \mid a_0, L_0)}{\partial a_0} + \frac{1}{2}\sigma^2(a_0)\frac{\partial^2 R(t \mid a_0, L_0)}{\partial a_0^2}.$$
Moreover, this equation is required to satisfy the following initial and boundary conditions:
$$R(0 \mid a_0, L_0) = 1,\quad a_0 \in \Omega; \qquad R(t \mid a_0, L_0) = p_1,\quad a_0 = 0; \qquad R(t \mid a_0, L_0) = 0,\quad a_0 \in \partial\Omega.$$
Here, $p_1$ is a probability constant satisfying $0 \le p_1 \le 1$. Meanwhile, the first-passage time probability $p_T(t \mid a_0, L_0)$ is given by the following expression:
$$p_T(t \mid a_0, L_0) = -\left.\frac{\partial R(t \mid a_0, L_0)}{\partial t}\right|_{t = T(a_0)}.$$
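To make the scheme concrete, the BK initial–boundary value problem above can be integrated with a simple explicit finite-difference scheme. The following Python sketch assumes an illustrative safe domain $\Omega = [0, 1]$ and placeholder drift and diffusion functions `m` and `s2`; these are hypothetical stand-ins, not the averaged coefficients of this system.

```python
# Sketch: explicit finite-difference integration of the BK equation
#   dR/dt = m(a0) dR/da0 + 0.5 * s2(a0) d^2R/da0^2   on Omega = [0, 1],
# with R(0|a0) = 1, R(t|0) = p1, and R(t|a0) = 0 on the absorbing boundary.
def solve_bk(m, s2, p1=1.0, n=101, T=1.0, steps=2000):
    da, dt = 1.0 / (n - 1), T / steps
    R = [1.0] * n                       # initial condition R(0|a0, L0) = 1
    history = [R[:]]
    for _ in range(steps):
        new = R[:]
        for i in range(1, n - 1):
            a = i * da
            d1 = (R[i + 1] - R[i - 1]) / (2 * da)           # dR/da0
            d2 = (R[i + 1] - 2 * R[i] + R[i - 1]) / da**2   # d^2R/da0^2
            new[i] = R[i] + dt * (m(a) * d1 + 0.5 * s2(a) * d2)
        new[0], new[-1] = p1, 0.0       # boundary conditions
        R = new
        history.append(R[:])
    return history

def first_passage_density(history, dt):
    # p_T = -dR/dt, approximated with a backward difference in time.
    return [[-(nxt[i] - cur[i]) / dt for i in range(len(cur))]
            for cur, nxt in zip(history, history[1:])]
```

With zero drift and constant diffusion, for example, the reliability profile decays near the absorbing boundary while staying close to one deep inside the safe domain, and the resulting first-passage density is non-negative.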

4. Optimization Problem Formulation and PSO-GRBFNN Algorithm Framework

This paper aims to achieve the highest possible level of system reliability by solving for the optimal system parameters. Thus, the following reliability optimization problem is established:
$$\underset{L_0}{\arg\max}\; R(t \mid a_0, L_0), \qquad t \in [0, T]$$
$$\text{s.t.} \quad \frac{\partial R(t \mid a_0, L_0)}{\partial t} = L_{BK}\big[R(t \mid a_0, L_0)\big]; \quad R(t \mid a_0, L_0) = p_1,\ a_0 = 0; \quad R(t \mid a_0, L_0) = 0,\ a_0 \in \partial\Omega; \quad R(0 \mid a_0, L_0) = 1,\ a_0 \in \Omega.$$
In the formulated optimization problem, the parameter $L_0$ is unknown, and its value must be determined by maximizing the reliability function subject to the constraints imposed by the BK equation and its initial and boundary conditions. The objective function is not explicit: evaluating it requires solving the BK equation. Since conventional optimization algorithms struggle with such problems, our previous work proposed the GA-GRBFNN algorithm for this purpose [45]. Although GA-GRBFNN handles the problem effectively, its efficiency gradually declines as the number of iterations and the population size grow. This paper therefore proposes the PSO-GRBFNN algorithm to overcome these difficulties. The overall frameworks and ideas of the two algorithms are shown in Figure 5.
As Figure 5 shows, the GRBFNN serves as the core of both algorithms, computing the fitness function. Compared with the GA, the PSO algorithm has a simpler overall workflow and generates less intermediate data. Since the GA-GRBFNN algorithm has been described in detail in previous studies, this paper focuses on the PSO-GRBFNN procedure.
Step 1: Initialization
Within the domain of definition of $L_0$, randomly generate the initial positions $X = (X_1, X_2, \ldots, X_{N_s})$ and initial velocities $V = (V_1, V_2, \ldots, V_{N_s})$ for $N_s$ particles. Record each particle's current position as its individual best position $P_b$, and take the best among all particles as the initial global best position $G_b$.
Step 2: Fitness Function Design
The choice of fitness function plays a decisive role in the performance of the PSO algorithm, and its complexity also affects the complexity of the overall algorithm. In the PSO-GRBFNN algorithm, we embed the GRBFNN into the PSO framework to compute the fitness of each particle: in every iteration, PSO and GRBFNN operate in a tightly coupled manner. This coupling is the core innovation of our work and addresses the challenge posed by the non-explicit objective function. Since the reliability function is non-negative and the GRBFNN can solve the BK equation, we take the reliability value computed by the GRBFNN as the fitness value. This requires the following ansatz for the approximate solution:
$$\tilde R(t_k \mid a_0, w(t_k)) = \sum_{i=1}^{N} w_i(t_k)\, G(a_0, \mu_i, \sigma_i), \qquad k = 1, 2, \ldots$$
$w(t_k) = [w_1(t_k), w_2(t_k), \ldots, w_N(t_k)]^T$ is the weight coefficient vector to be determined, where $w_i(t_k)$ denotes the weight of the $i$-th kernel function at the $k$-th time step. The kernel functions of the GRBFNN are Gaussian, and a weighted sum of $N$ kernels is employed to approximate the solution of the original equation. For the $i$-th kernel, the mean and standard deviation are $\mu_i = \left(i - \tfrac{1}{2}\right)\frac{|\Omega|}{N}$ and $\sigma_i = \frac{|\Omega|}{N}$, respectively. The Gaussian kernel function is given by
$$G(a_0, \mu_i, \sigma_i) = \prod_{j=1}^{2} \frac{1}{\sqrt{2\pi}\,\sigma_i} \exp\!\left(-\frac{(a_{0j} - \mu_{ij})^2}{2\sigma_i^2}\right).$$
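As a sketch of this construction, the kernel grid and the weighted-sum approximation can be written as follows, specialized to the one-dimensional amplitude input used in this paper; `omega_len` is an assumed name for the length $|\Omega|$ of the safe domain.

```python
import math

def gauss(a0, mu, sig):
    # One-dimensional Gaussian kernel G(a0, mu_i, sigma_i).
    return math.exp(-(a0 - mu) ** 2 / (2 * sig ** 2)) / (math.sqrt(2 * math.pi) * sig)

def rbf_grid(omega_len, N):
    # Centers mu_i = (i - 1/2)|Omega|/N and common width sigma_i = |Omega|/N.
    return [(i - 0.5) * omega_len / N for i in range(1, N + 1)], omega_len / N

def network(a0, w, mus, sig):
    # Approximate solution R~(t_k | a0) = sum_i w_i(t_k) G(a0, mu_i, sigma_i).
    return sum(wi * gauss(a0, mu, sig) for wi, mu in zip(w, mus))
```

Because the centers are evenly spaced with spacing equal to the kernel width, equal weights $w_i = 1/N$ reproduce an approximately constant function over the interior of the domain.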
The initial amplitude serves as the sole input variable of the GRBFNN. Consequently, we only need to generate training data of size $M$ for each PSO iteration, so the overall size of the training data is $M_s \times M$. The training data are generated by uniform sampling within the defined domain, which effectively avoids data clustering and ensures a uniform, reasonable distribution. The generated training data are shown in Figure 6.
Furthermore, based on the findings of [31], the size of the training data should be at least four times the number of neurons, i.e., $M = 4N$. Each training set is fed into the neural network. Substituting the approximate solution into the BK operator of Equation (40), we obtain the error $Err_1(a_0, w(t_k))$ between the exact and approximate solutions:
$$Err_1(a_0, w(t_k)) = \Delta t \times L_{BK}\big[\tilde R(t_k \mid a_0, w(t_k))\big] - \tilde R(t_k \mid a_0, w(t_k)) + \tilde R(t_{k-1} \mid a_0, w(t_{k-1})) = \sum_{i=1}^{N} w_i(t_k)\, h_i(a_0) + \sum_{i=1}^{N} w_i(t_{k-1})\, G(a_0, \mu_i, \sigma_i),$$
where $\Delta t$ is the time step and $h_i(a_0) = \Delta t \times L_{BK}[G(a_0, \mu_i, \sigma_i)] - G(a_0, \mu_i, \sigma_i)$. Additionally, the reliability function must satisfy the boundary conditions, so a second component is incorporated into the loss function to fulfill this requirement:
$$Err_2(a_0, w(t_k)) = \tilde R(t_k \mid a_0, w(t_k)), \qquad a_0 \in \partial\Omega.$$
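Since the kernels are Gaussian, $L_{BK}[G]$ and hence $h_i(a_0)$ can be evaluated in closed form. A small sketch follows, with placeholder drift `m` and diffusion `s2` standing in for the averaged coefficients:

```python
import math

def gauss(a0, mu, sig):
    return math.exp(-(a0 - mu) ** 2 / (2 * sig ** 2)) / (math.sqrt(2 * math.pi) * sig)

def h_i(a0, mu, sig, m, s2, dt):
    # h_i(a0) = dt * L_BK[G](a0) - G(a0), using the exact Gaussian derivatives
    #   G'  = -((a0 - mu)/sig^2) * G
    #   G'' = ((a0 - mu)^2/sig^4 - 1/sig^2) * G.
    g = gauss(a0, mu, sig)
    g1 = -(a0 - mu) / sig ** 2 * g
    g2 = ((a0 - mu) ** 2 / sig ** 4 - 1.0 / sig ** 2) * g
    return dt * (m(a0) * g1 + 0.5 * s2(a0) * g2) - g
```

Evaluating $h_i$ analytically avoids numerical differentiation of the kernels inside the training loop.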
Therefore, the loss function associated with the solution of the BK equation should account for both the local error and the boundary condition simultaneously. Based on this, we construct the loss function as follows:
$$Loss_{BK}[w(t_k), \lambda(t_k)] = \frac{1}{2}\int_{\Omega}\big[Err_1(a_0, w(t_k))\big]^2\,\mathrm{d}a_0 + \frac{1}{2}\lambda(t_k)\int_{\partial\Omega}\big[Err_2(a_0, w(t_k))\big]^2\,\mathrm{d}a_0.$$
Here, $\lambda(t_k)$ is a Lagrange multiplier to be determined, $\Omega$ is the safe domain of $a_0$, and $\partial\Omega$ denotes its boundary. Discretizing the loss function over a partition of the safe domain then gives:
$$Loss_{BK}[w(t_k), \lambda(t_k)] = \frac{1}{2}\sum_{j=1}^{M}\sum_{i=1}^{N}\sum_{l=1}^{N} h_i(a_{0j})\, h_l(a_{0j})\, w_i(t_k)\, w_l(t_k) + \sum_{j=1}^{M}\sum_{i=1}^{N} h_i(a_{0j})\, w_i(t_k)\, \tilde R(t_{k-1} \mid a_{0j}, w(t_{k-1})) + \frac{1}{2}\sum_{j=1}^{M} \tilde R(t_{k-1} \mid a_{0j}, w(t_{k-1}))^2 + \frac{1}{2}\lambda(t_k)\sum_{a_{0j} \in \partial\Omega} \tilde R(t_k \mid a_{0j}, w(t_k))^2 = \frac{1}{2} w^T(t_k)\big[H_0 + \lambda(t_k) R_b\big] w(t_k) + w^T(t_k)\, Q_0\, w(t_{k-1}) + d_0(t_{k-1}).$$
In which $G_1 = [G(a_{0j}, \mu_i, \sigma_i)]$ with $a_{0j} \in \partial\Omega$, $H = [h_i(a_{0j})]$, $H_0 = H H^T$, $Q_0 = H G^T$, $R_b = G_1 G_1^T$, and $d_0(t_{k-1}) = \frac{1}{2} w^T(t_{k-1})\, G G^T w(t_{k-1})$. To minimize the error of the neural-network solution with respect to the parameters $w(t_k)$ and $\lambda(t_k)$, the following conditions must be satisfied:
$$\frac{\partial Loss_{BK}[w(t_k), \lambda(t_k)]}{\partial w(t_k)} = 0, \qquad \frac{\partial Loss_{BK}[w(t_k), \lambda(t_k)]}{\partial \lambda(t_k)} = 0.$$
By solving Equation (50), we derive an iterative formula for the unknown weight coefficients $w(t_k)$, as detailed below:
$$w(t_k) = Z\, l(t_k),$$
$$l(t_k) = -(Z^T H_0 Z)^{-1}(Z^T Q_0 Z)\, l(t_{k-1}).$$
Here, the columns of $Z$ span the null space of $R_b$. From the neural network constructed with these weights, we obtain the time-varying reliability function. In this paper, the average reliability value corresponding to each candidate parameter is adopted as the particle's fitness value for the PSO:
$$\bar R(\tau) = \frac{1}{T}\sum_{k=1}^{T/\Delta t} \Delta t\, R(t_k \mid a_0), \qquad \tau = \tau_1, \tau_2, \ldots, \tau_{N_s}.$$
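A minimal sketch of this discrete time average; the series `R_series` would come from the GRBFNN solution of the BK equation for one candidate parameter:

```python
def average_reliability(R_series, dt, T):
    # Rbar = (1/T) * sum_k dt * R(t_k | a0): the discrete time average of the
    # reliability curve, used as the fitness value of one candidate L0.
    return (dt / T) * sum(R_series)
```

A constant reliability of one over the whole horizon yields a fitness of one, and any dip in the curve lowers the fitness proportionally.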
Step 3: Iterative Update
Update of the individual best position $P_b$: if the current fitness value of a particle is better than its recorded individual best fitness, the individual best of that particle is updated. The specific update rule is as follows:
$$P_i^b = \begin{cases} P_i^b, & \bar R(L_{0i}) < \bar R(P_i^b), \\ L_{0i}, & \bar R(L_{0i}) \ge \bar R(P_i^b). \end{cases}$$
Update of the global best position $G^b$: if some particle's individual best fitness exceeds the currently recorded global best fitness of the swarm, the global best is updated. The specific update rule is as follows:
$$G^b = \begin{cases} P_i^b, & \bar R(P_i^b) \ge \bar R(G^b), \\ G^b, & \bar R(P_i^b) < \bar R(G^b). \end{cases}$$
Update of velocity and position:
Regarding the two crucial parameters of velocity and position, their update process can be referred to in Figure 7.
By observing this figure, it can be clearly seen that the iterative calculations for velocity and position follow the following formulas:
$$V_i^{t+1} = \omega V_i^t + c_1 r_1 (P_i^b - L_{0i}^t) + c_2 r_2 (G^b - L_{0i}^t), \qquad L_{0i}^{t+1} = L_{0i}^t + V_i^{t+1}.$$
Here, $\omega$ is the inertia weight, which regulates how much a particle inherits its previous velocity; $r_1$ and $r_2$ are random numbers in $[0, 1]$; and $c_1$ and $c_2$ are learning factors that weight the influence of individual and swarm experience on particle updates, with values typically between 1 and 2.
In the iterative formula, the first part is the memory term, which generally reflects the impact of a particle’s previous velocity magnitude and direction on its current velocity. The second term is called the self-cognition term. It is a vector pointing from the current point to the particle’s own historical best position, representing the change trend of the particle based on its own experience. The third term is referred to as the swarm cognition term. It is a vector pointing from the current point to the global best position of the swarm, reflecting how particles adjust their states through collaboration and knowledge sharing. Particles determine their next move by comprehensively considering both their own experience and the swarm’s experience.
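The velocity and position updates, together with the best-position updates of Step 3, can be sketched as one PSO iteration. Here `fitness` is a stand-in for the GRBFNN-computed average reliability $\bar R$, and the parameter bounds are illustrative:

```python
import random

def pso_step(pos, vel, pbest, gbest, fitness,
             w=0.729, c1=1.5, c2=1.5, bounds=(0.0, 1.0)):
    # One PSO iteration: memory, self-cognition, and swarm-cognition terms,
    # followed by the personal- and global-best updates described above.
    for i in range(len(pos)):
        r1, r2 = random.random(), random.random()
        vel[i] = (w * vel[i]
                  + c1 * r1 * (pbest[i] - pos[i])     # self-cognition term
                  + c2 * r2 * (gbest - pos[i]))       # swarm-cognition term
        pos[i] = min(max(pos[i] + vel[i], bounds[0]), bounds[1])
        if fitness(pos[i]) > fitness(pbest[i]):
            pbest[i] = pos[i]
    return max(pbest, key=fitness)                    # new global best
```

Repeating this step for $M_S$ iterations drives the swarm toward the parameter with the highest fitness.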
At this point, the particle swarm algorithm has completed one round of iteration, and the position parameters and velocity parameters ( L 0 , V ) are updated. Subsequently, the process repeats from Step 2 to Step 3 to continuously update and optimize the parameters. Eventually, the optimal control parameters that maximize the reliability function can be obtained. Algorithm 1 presents the pseudocode for the PSO-GRBFNN algorithm.
Algorithm 1 PSO-GRBFNN Algorithm
Require: Size of the particle swarm $N_S$; maximum number of PSO iterations $M_S$; value range of $L_0$; inertia weight $\omega$; learning factors $c_1$ and $c_2$; random parameters $r_1$ and $r_2$ in $[0, 1]$; number of Gaussian radial basis functions $N$.
Ensure: The optimal values $L_0^*$ and $\bar R$.
1: Generate an initial population of size $N_S$; set the initial values of $L_0$ and the velocities $V$
2: Set the initial individual best positions $P_b$ and the global best position $G_b$
3: while iteration count $< M_S$ do
4:  Generate training data associated with the amplitude $a$
5:  Employ the GRBFNN method to obtain the weights $w(t_k)$
6:  Calculate the corresponding reliability value for each particle
7:  Update $P_b$ and $G_b$ based on each particle's fitness value
8:  Update the particle positions $L_0$ and velocities $V$ using the iterative formulas
9: end while
10: Select the particle with the best fitness as $G_b$
11: return the optimal $L_0^*$ and $\bar R$
To facilitate a comparative analysis between the PSO-GRBFNN algorithm and the GA-GRBFNN algorithm, Algorithm 2 provides a systematic summary of the pseudocode for the GA-GRBFNN algorithm.
Algorithm 2 GA-GRBFNN Algorithm
Require: Size of the population $N_S$; maximum number of GA generations $M_S$; value range of $L_0$; crossover probability $P_c$; mutation probability $P_m$; roulette probability $P_s$; number of Gaussian radial basis functions $N$.
Ensure: The optimal values $L_0^*$ and $\bar R$.
1: Generate an initial population of size $N_S$; encode the initial values of $L_0$ in binary format
2: while iteration count $< M_S$ do
3:  Generate training data associated with the amplitude $a$
4:  Employ the GRBFNN method to obtain the weights $w(t_k)$
5:  Calculate the corresponding reliability value for each individual
6:  Evaluate the fitness function $\bar R(L_0)$ for each individual in the population
7:  Conduct selection based on an improved roulette wheel selection strategy
8:  Perform arithmetic crossover to generate offspring with probability $P_c$
9:  Apply mutation to introduce small changes with probability $P_m$
10: end while
11: Select the individual with the best fitness as the optimal solution
12: Extract the optimal $L_0$ value from the best individual
13: return the optimal $L_0^*$ and $\bar R$

5. Numerical Simulation

In this section, we employ the GA-GRBFNN algorithm and the PSO-GRBFNN algorithm to conduct exploratory research on the optimal reliability probability of the system. Meanwhile, we utilize the Monte Carlo simulation method to comparatively validate the effectiveness and accuracy of these two proposed algorithms and engage in an in-depth discussion on their performance.
Under the given parameter conditions $\omega_1 = 2$, $\gamma_{ng} = 0.6$, $L = 1$, $\theta = 1$, $\xi = 0.15$, $m_1 = 4{,}700{,}000$, $\alpha = 0.5$, and $D = 0.5$, the GA-GRBFNN algorithm with $N_S = 80$, $M_S = 100$, $P_c = 0.6$, $P_m = 0.005$, and $\epsilon = 0.0001$ yields $L_0 = 0.66795$, while the PSO-GRBFNN algorithm with $N_S = 20$, $M_S = 50$, $c_1 = 1.5$, $c_2 = 1.5$, and $\omega = 0.729$ yields $L_0 = 0.66427$. Both algorithms can thus effectively solve for the optimal parameters of the system.
From Figure 8, it can be clearly observed that under the optimal parameter conditions, the reliability function values produced by the two methods are extremely close, with a maximum gap on the order of $10^{-3}$, and the average reliability derived from both is close to 0.99. Moreover, the reliability corresponding to the optimal parameter obtained with PSO-GRBFNN is slightly higher than that obtained with GA-GRBFNN.
As can be clearly seen from Figure 9, under the optimal parameter L 0 * = 0.66427 obtained by the PSO-GRBFNN algorithm, the system demonstrates high reliability. Moreover, as L 0 increases, the system reliability continuously decreases.
In the display of Figure 9, the solid line represents the solution results based on the GRBFNN algorithm, while the scattered points denote the solutions obtained through Monte Carlo Simulation (MCS). During the MCS process, we employ a large amount of random data to solve Equation (40) using the fourth-order Runge–Kutta method. Meanwhile, we take into account the boundary condition that the reliability function satisfies. Based on this, we conducted a statistical analysis on the simulation results and calculated the final outcomes accordingly. It is evident from the figure that the curves derived from the GRBFNN algorithm and MCS exhibit a high degree of fit, showing good consistency between the two. This phenomenon fully proves that the GRBFNN algorithm possesses high accuracy and reliability in solving such problems, effectively validating the algorithm’s effectiveness.
On the other hand, we further analyzed the optimal parameter $L_0^* = 0.66427$ obtained with the PSO-GRBFNN algorithm. Figure 9a shows that this $L_0^*$ indeed drives the system to an optimal control state. Over time, the system's reliability value decreases continuously; however, when the selected control parameter is not the optimal one, the rate of decline becomes significantly faster.
Figure 9b analyzes the situation from the perspective of the first-passage probability, which equally validates the superiority of the solved optimal control parameter L 0 * . When the system adopts non-optimal parameters, it exhibits the following variation pattern: in a relatively short time, the probability of the system experiencing a first passage is relatively high; as time progresses, the probability of first passage continuously decreases. This implies that the system faces a high risk of first passage in the short term, which may lead to a sharp decline in system performance or even failure. Conversely, when the solved optimal parameter L 0 * is adopted, the system can better maintain a low first-passage probability, thereby ensuring better stability and reliability of the system.
Therefore, the above conclusions all indicate that the PSO-GRBFNN algorithm demonstrates significant effectiveness in finding optimal control parameters. That is, the algorithm can relatively accurately locate the parameters that optimize system performance, thereby effectively enhancing the overall reliability and stability of the system.
To further explore the degree of agreement between the results obtained by the GRBFNN algorithm and MCS, we conducted an error analysis of the system, and the specific analysis result is shown in Table 1.
Regarding the errors between the proposed method and MCS, we selected the three curves shown in Figure 9a as typical cases and compared the results obtained from MCS and GRBFNN. The resulting minimum, maximum, and average errors are listed in detail in Table 1.
The data in the table show that the overall average error is 0.0060, the maximum error does not exceed 0.03, and the minimum error is on the order of $10^{-3}$. These figures strongly indicate that the GRBFNN results agree closely with MCS, demonstrating high accuracy and validity and fully verifying the capability of the GRBFNN for this class of problems.
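The error statistics reported in Table 1 amount to pointwise absolute differences between the two reliability curves; a minimal sketch:

```python
def error_stats(grbfnn_curve, mcs_curve):
    # Pointwise absolute errors between a GRBFNN reliability curve and the
    # corresponding MCS estimate: returns (min, max, mean) error.
    errs = [abs(g - m) for g, m in zip(grbfnn_curve, mcs_curve)]
    return min(errs), max(errs), sum(errs) / len(errs)
```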
In addition, we have also conducted an in-depth exploration of the impact of the initial amplitude a 0 on system reliability, covering its effects on the reliability function and the probability of the mean first-passage time. Relevant content can be found in Figure 10.
As can be clearly seen from Figure 10a, with the continuous increase in the initial amplitude, the reliability function exhibits a gradual downward trend. Moreover, when the initial amplitude gradually approaches the boundary of the safe region, this declining trend significantly accelerates. This implies that the larger the value of the initial amplitude a 0 , the lower the system’s reliability will be, and the higher the probability of the system crossing the boundary for the first time, thereby making it more likely to cause structural damage or failure.
These research findings fully demonstrate that the magnitude of the initial amplitude has a significant influence on the likelihood of the system crossing the boundary. By observing Figure 10b, it can be found that when the initial amplitude a 0 = 0.9 , the probability of the system crossing the boundary within a short period exceeds 50 % , while the probability of crossing the boundary over a longer period is relatively low. Conversely, when the initial amplitude a 0 = 0.5 , the probability of the system crossing the boundary within a short period is close to zero, and its probability of crossing the boundary in the long term remains relatively stable.
To more precisely and thoroughly evaluate the performance differences between the GA-GRBFNN and the PSO-GRBFNN, we carried out the following comparative analysis.
Under the experimental condition of $M_s = 100$ iterations with a varying population size $N_s$, Figure 11 shows that PSO-GRBFNN successfully obtains the optimal solution even with a small population. Moreover, as the population size increases, the optimal solution it returns fluctuates only slightly, demonstrating stable performance. In stark contrast, the optimal solution of GA-GRBFNN exhibits significant volatility, and its quality under these conditions shows a clear gap compared with PSO-GRBFNN.
Under the experimental setting of a fixed population size $N_s = 40$ with a varying maximum number of iterations $M_s$, Figure 12 shows that PSO-GRBFNN performs outstandingly: it converges rapidly to the optimal solution within relatively few iterations, with minimal variation across solving attempts, indicating high stability. In contrast, GA-GRBFNN exhibits notably greater volatility during the solution process; although this volatility gradually decreases as the number of iterations grows, the quality of its optimal solution still falls short of PSO-GRBFNN under the same conditions.
From the comparative data on running time in Figure 13, it can be observed that under the experimental condition of a fixed number of iterations, the overall running time required by PSO-GRBFNN is consistently less than that of GA-GRBFNN. When the initial population size is fixed, the running times of the two are relatively close in scenarios with a small number of iterations. However, as the number of iterations continues to increase, the time consumed by GA-GRBFNN rises significantly. This phenomenon clearly indicates that, in terms of solving problems, PSO-GRBFNN possesses certain advantages over GA-GRBFNN in terms of operational efficiency.
Finally, we analyzed the sensitivity of the reliability solution to the number of neural network nodes. In Figure 14, the lines represent the neural-network reliability solutions for different numbers of nodes, while the points represent the MCS results. With a small number of nodes, the network's reliability solution is unsatisfactory and the errors are relatively large. However, once the number of nodes $N$ reaches or exceeds 104, the network results are consistent with MCS, and further increases in the number of nodes have minimal impact on the solution.

6. Conclusions

The reliability optimization of NSD isolation systems with fractional-order damping under random excitation is a highly practical and meaningful topic. In this paper, a corresponding mathematical model is constructed for such systems, and the PSO-GRBFNN method is innovatively introduced to systematically investigate the impact of initial compression length on the optimal reliability of the NSD system.
Given the complexity of fractional-order damping, we successfully derive the BK equation satisfied by the reliability function using the generalized stochastic averaging method. On this basis, with the reliability function as the optimization objective and the BK equation along with its related constraints as limiting factors, an implicit optimization model is constructed. By leveraging the powerful solving and optimization capabilities of the PSO-GRBFNN algorithm, we obtain the optimal solution for the initial compression length. The study yields the following conclusions: the longer the compressed length of the system, the weaker its ability to control system reliability; when compressed to a certain length, the change in system reliability becomes no longer significant.
Secondly, by employing MCS to analyze the system reliability function and the mean first-passage probability, we fully demonstrate that the obtained control parameters achieve optimal control of system reliability. A further comparison of PSO-GRBFNN with the previously proposed GA-GRBFNN reveals that PSO-GRBFNN outperforms GA-GRBFNN in both solution quality and efficiency. An in-depth exploration of the relationship between algorithm performance and internal parameters shows that the number of neurons in the GRBFNN has a significant impact on the algorithm's effectiveness; when the number of neurons reaches 104 or more, the algorithm performs even better.
This study provides a solid theoretical foundation for the reliability optimization of NSD isolation systems with fractional-order damping. Its core value lies in effectively enhancing the overall system reliability through optimization algorithms. The proposed PSO-GRBFNN method is not only applicable to the reliability optimization of coupled systems with controllers but also has broad applicability and can be applied to other similar reliability optimization and control problems in random dynamic systems. For example, in the aerospace field, aircraft are subject to random disturbances such as atmospheric turbulence. This method can be used to conduct in-depth analyses of the impact of these random disturbances on aircraft structures and related parameters, thereby improving aircraft safety and reliability. In wind power generation systems, reliability control methods can be employed to analyze the impact of random factors such as wind speed and direction on the system, optimize system design, and enhance system reliability and stability.
However, it must be pointed out that, like any modeling method, the method proposed in this study also has certain limitations. When dealing with high-dimensional parameter optimization or high-dimensional reliability research problems, one may face challenges such as a sharp increase in computational complexity and optimization difficulty. Therefore, in future research work, we will fully consider these limitations and actively explore more advanced solutions. Despite the aforementioned limitations, this study holds milestone significance in the field of non-explicit reliability optimization of random dynamic systems. Future research directions can focus on exploring other excellent hybrid optimization algorithms, such as the Simulated Annealing (SA) algorithm, as well as other machine learning techniques, to improve the efficiency and accuracy of solutions to optimization problems. At the same time, we will strengthen research on the universality of algorithms to enable them to adapt to a wider range of practical application requirements and provide more powerful technical support for the reliability optimization of random dynamic systems.

Author Contributions

Conceptualization, M.L. and W.L.; methodology, M.L. and W.L.; software, M.L.; validation, M.L. and W.L.; formal analysis, M.L. and W.L.; investigation, M.L. and W.L.; resources, W.L.; data curation, M.L.; writing—original draft preparation, M.L. and W.L.; writing—review and editing, D.H. and N.T.; visualization, M.L.; supervision, W.L.; project administration, W.L.; funding acquisition, W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by National Natural Science Foundation of China (No. 12272283 and 12172266) and Foreign Expert Service Program of Shaanxi Province (No. 2025WZ-YBXM-13).

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A

Table A1. Main notation list.
$m_1$: the mass of the isolation structure
$D^\alpha x(t)$: fractional-order damping
$\ddot x_g$: random excitation
$\tilde F_{ng}$: the spring force
$c_1$: total damping coefficient
$k_1$: total stiffness coefficient
$c_{ng}$: the damping coefficient
$k_v$: the negative stiffness coefficient
$U(x)$: the system's energy
$\bar\omega_a$: the average frequency
$m(a)$: the drift function
$\sigma^2(a)$: the diffusion function
$L_{BK}$: the elliptic operator
$R(t \mid a_0, L_0)$: the reliability function

Appendix B

Appendix B.1

$$v(a, \theta) = \sqrt{\frac{2\big[\tfrac12 K (a^2 - a^2\cos^2\theta) + \tfrac14 K^2 (a^4 - a^4\cos^4\theta) - \tfrac16 K^3 (a^6 - a^6\cos^6\theta) + \tfrac18 K^4 (a^8 - a^8\cos^8\theta)\big]}{a^2 \sin^2\theta}} = \sqrt{K + \frac{K^2 a^2 (1 + \cos^2\theta)}{2} - \frac{K^3 a^4 (1 + \cos^2\theta + \cos^4\theta)}{3} + \frac{K^4 a^6 (1 + \cos^2\theta)(1 + \cos^4\theta)}{4}} = \big[\eta\,(1 + \eta_1\cos 2\theta + \eta_2\cos 4\theta + \eta_3\cos 6\theta)\big]^{1/2},$$
in which
$$\eta = K + \frac34 K^2 a^2 - \frac58 K^3 a^4 + \frac{35}{64} K^4 a^6, \quad \eta_1 = \frac{\frac14 K^2 a^2 - \frac13 K^3 a^4 + \frac{47}{128} K^4 a^6}{\eta}, \quad \eta_2 = \frac{\frac{5}{64} K^4 a^6 - \frac{1}{24} K^3 a^4}{\eta}, \quad \eta_3 = \frac{K^4 a^6}{128\,\eta}.$$
It is known that the Taylor series expansion of the function ( 1 + x ) 1 2 is as follows:
$$(1 + x)^{1/2} = 1 + \frac12 x - \frac18 x^2 + \frac{1}{16} x^3 - \cdots,$$
thus, letting $x = \eta_1\cos 2\theta + \eta_2\cos 4\theta + \eta_3\cos 6\theta$ and substituting into Equation (A3), we can deduce the following:
$$(1 + \eta_1\cos 2\theta + \eta_2\cos 4\theta + \eta_3\cos 6\theta)^{1/2} = 1 + \frac12(\eta_1\cos 2\theta + \eta_2\cos 4\theta + \eta_3\cos 6\theta) - \frac18(\eta_1\cos 2\theta + \eta_2\cos 4\theta + \eta_3\cos 6\theta)^2 + \frac{1}{16}(\eta_1\cos 2\theta + \eta_2\cos 4\theta + \eta_3\cos 6\theta)^3 - \cdots.$$
Taking $(\eta_1\cos 2\theta + \eta_2\cos 4\theta + \eta_3\cos 6\theta)^2$ as an example, applying the trigonometric identities $\cos^2\alpha = \frac{1 + \cos 2\alpha}{2}$ and $\cos\alpha\cos\beta = \frac{\cos(\alpha + \beta) + \cos(\alpha - \beta)}{2}$ to expand the expression and simplifying, we obtain the following result:
$$(\eta_1\cos 2\theta + \eta_2\cos 4\theta + \eta_3\cos 6\theta)^2 = \eta_1^2\,\frac{1 + \cos 4\theta}{2} + \eta_1\eta_2[\cos 6\theta + \cos 2\theta] + \eta_1\eta_3[\cos 8\theta + \cos 4\theta] + \eta_2\eta_3[\cos 10\theta + \cos 2\theta] + \eta_2^2\,\frac{1 + \cos 8\theta}{2} + \eta_3^2\,\frac{1 + \cos 12\theta}{2}.$$
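This product-to-sum expansion can be spot-checked numerically for arbitrary values of $\eta_1$, $\eta_2$, $\eta_3$ (the numeric values below are illustrative only):

```python
import math

def lhs(theta, e1, e2, e3):
    # (eta1 cos2t + eta2 cos4t + eta3 cos6t)^2 before rewriting
    return (e1 * math.cos(2 * theta) + e2 * math.cos(4 * theta)
            + e3 * math.cos(6 * theta)) ** 2

def rhs(theta, e1, e2, e3):
    # The same quantity after applying cos^2(a) = (1 + cos 2a)/2 and
    # cos(a)cos(b) = (cos(a+b) + cos(a-b))/2.
    return (e1 ** 2 * (1 + math.cos(4 * theta)) / 2
            + e1 * e2 * (math.cos(6 * theta) + math.cos(2 * theta))
            + e1 * e3 * (math.cos(8 * theta) + math.cos(4 * theta))
            + e2 * e3 * (math.cos(10 * theta) + math.cos(2 * theta))
            + e2 ** 2 * (1 + math.cos(8 * theta)) / 2
            + e3 ** 2 * (1 + math.cos(12 * theta)) / 2)
```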
By employing the same method, we can expand $(\eta_1\cos 2\theta + \eta_2\cos 4\theta + \eta_3\cos 6\theta)^3$. Subsequently, we merge the cosine terms of the same frequency in the expanded expression with those obtained above:
$$\eta^{1/2}\Big(1 - \frac{\eta_1^2}{16}\Big) + \eta^{1/2}\Big(\frac{\eta_1}{2} - \frac{\eta_1\eta_2}{8}\Big)\cos 2\theta + \eta^{1/2}\Big(\frac{\eta_2}{2} - \frac{\eta_1^2}{16}\Big)\cos 4\theta + \eta^{1/2}\Big(\frac{\eta_3}{2} - \frac{\eta_1\eta_2}{8}\Big)\cos 6\theta + \cdots,$$
therefore, $v(a, \theta)$ can be approximately expressed as follows:
$$v(a, \theta) \approx b_0(a) + b_2(a)\cos 2\theta + b_4(a)\cos 4\theta + b_6(a)\cos 6\theta.$$
It needs to be emphasized that
$$b_0(a) = \eta^{1/2}\Big(1 - \frac{\eta_1^2}{16}\Big), \quad b_2(a) = \eta^{1/2}\Big(\frac{\eta_1}{2} - \frac{\eta_1\eta_2}{8}\Big), \quad b_4(a) = \eta^{1/2}\Big(\frac{\eta_2}{2} - \frac{\eta_1^2}{16}\Big), \quad b_6(a) = \eta^{1/2}\Big(\frac{\eta_3}{2} - \frac{\eta_1\eta_2}{8}\Big).$$

Appendix B.2

$$v' = v\left[\frac{\eta'}{2\eta} - \frac{\eta_1\,\eta_1'}{8\left(1 - \frac{\eta_1^2}{16}\right)}\right],$$
$$\eta' = \frac32 K^2 a - \frac52 K^3 a^3 + \frac{105}{32} K^4 a^5,$$
$$\eta_1' = \frac{\left(\frac12 K^2 a - \frac43 K^3 a^3 + \frac{282}{128} K^4 a^5\right)\eta - \left(\frac14 K^2 a^2 - \frac13 K^3 a^4 + \frac{47}{128} K^4 a^6\right)\eta'}{\eta^2}.$$

Figure 1. Schematic diagram of the NSD seismic isolation structure incorporating fractional-order damping.
Figure 2. (a) Motion state diagram of the NSD; (b) force analysis diagram of the NSD.
Figure 3. Comparison of the original function (solid) and its Taylor approximations (dashed, degrees 3/5/7): (a) 3rd-order approximation; (b) 5th-order approximation; (c) 7th-order approximation; (d) error of the 3rd-order approximation; (e) error of the 5th-order approximation; (f) error of the 7th-order approximation.
Figure 4. (a) Effect of spring stiffness on negative stiffness; (b) effect of the spring's original length on negative stiffness; (c) effect of spring length on negative stiffness.
Figure 5. Flowcharts of the GA-GRBFNN and PSO-GRBFNN algorithms.
Figure 6. Scatter plot of the training data.
Figure 7. Iterative vector diagram for the velocity and position updates. The star marks the location of the optimal value; red dots indicate the individual best position $P_{ib}^{t}$ and the global best position $G_{b}^{t}$ of the current particle swarm.
Figure 8. The reliability function under optimal parameters.
Figure 9. (a) Reliability function under different $L_0$; (b) first-passage probability function under different $L_0$.
Figure 10. (a) Reliability function under different initial amplitude $a_0$; (b) first-passage probability function under different initial amplitude $a_0$.
Figure 11. Results for the optimal value under the same initial population size $N_s$: (a) PSO-GRBFNN algorithm; (b) GA-GRBFNN algorithm.
Figure 12. Results for the optimal value under the same number of iterations $M_s$: (a) PSO-GRBFNN algorithm; (b) GA-GRBFNN algorithm.
Figure 13. Comparison of running time: (a) PSO-GRBFNN algorithm; (b) GA-GRBFNN algorithm.
Figure 14. The impact of the number of neural network nodes on the reliability function.
Table 1. Error between MCS and GRBFNN with respect to the reliability function. The numbers in red represent the minimum error and maximum error.

         L0* = 0.66427        |          L0 = 0.75           |          L0 = 0.85
GRBFNN   MCS      Error       | GRBFNN   MCS      Error      | GRBFNN   MCS      Error
1.0000   0.9998   0.0002      | 0.9999   1.0000   0.0001     | 0.9997   0.9990   0.0007
0.9999   0.9988   0.0011      | 0.9989   0.9986   0.0003     | 0.9957   0.9926   0.0031
0.9996   0.9974   0.0022      | 0.9963   0.9946   0.0017     | 0.9876   0.9840   0.0036
0.9990   0.9962   0.0028      | 0.9926   0.9870   0.0056     | 0.9777   0.9728   0.0049
0.9984   0.9944   0.0040      | 0.9885   0.9820   0.0065     | 0.9672   0.9618   0.0054
0.9976   0.9926   0.0050      | 0.9842   0.9782   0.0060     | 0.9566   0.9510   0.0056
0.9968   0.9910   0.0058      | 0.9798   0.9742   0.0056     | 0.9460   0.9422   0.0038
0.9960   0.9890   0.0070      | 0.9755   0.9702   0.0053     | 0.9356   0.9316   0.0040
0.9952   0.9876   0.0076      | 0.9711   0.9648   0.0063     | 0.9252   0.9214   0.0038
0.9944   0.9858   0.0086      | 0.9667   0.9602   0.0065     | 0.9150   0.9082   0.0068
0.9936   0.9842   0.0094      | 0.9624   0.9560   0.0064     | 0.9048   0.9008   0.0040
0.9928   0.9828   0.0100      | 0.9581   0.9538   0.0043     | 0.8948   0.8922   0.0026
0.9920   0.9804   0.0116      | 0.9538   0.9506   0.0032     | 0.8849   0.8818   0.0031
0.9912   0.9784   0.0128      | 0.9495   0.9472   0.0023     | 0.8750   0.8722   0.0028
0.9903   0.9772   0.0131      | 0.9452   0.9422   0.0030     | 0.8653   0.8636   0.0017
0.9895   0.9766   0.0129      | 0.9410   0.9386   0.0024     | 0.8557   0.8542   0.0015
0.9887   0.9748   0.0139      | 0.9367   0.9346   0.0021     | 0.8462   0.8454   0.0008
0.9879   0.9740   0.0139      | 0.9325   0.9310   0.0015     | 0.8369   0.8376   0.0007
0.9871   0.9706   0.0165      | 0.9283   0.9282   0.0001     | 0.8276   0.8292   0.0016
0.9863   0.9684   0.0179      | 0.9242   0.9234   0.0008     | 0.8184   0.8226   0.0042
0.9855   0.9668   0.0187      | 0.9200   0.9192   0.0008     | 0.8093   0.8158   0.0065
0.9847   0.9648   0.0199      | 0.9159   0.9150   0.0009     | 0.8003   0.8100   0.0097
0.9839   0.9628   0.0211      | 0.9117   0.9118   0.0001     | 0.7915   0.8012   0.0097
0.9831   0.9616   0.0215      | 0.9076   0.9096   0.0020     | 0.7827   0.7958   0.0131

Minimum error    0.0002       | 0.0001                       | 0.0007
Maximum error    0.0215       | 0.0065                       | 0.0131
Mean error       0.0107       | 0.0031                       | 0.0043
Global mean error: 0.0060
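The summary rows of Table 1 follow directly from the error columns. For instance, the statistics of the first column ($L_0^* = 0.66427$), with the error values read from the table, reproduce the quoted minimum, maximum, and mean:

```python
# Reproduce the summary rows of Table 1 from its first column of errors
# (the L0* = 0.66427 case), as read from the table above.
errors = [0.0002, 0.0011, 0.0022, 0.0028, 0.0040, 0.0050, 0.0058, 0.0070,
          0.0076, 0.0086, 0.0094, 0.0100, 0.0116, 0.0128, 0.0131, 0.0129,
          0.0139, 0.0139, 0.0165, 0.0179, 0.0187, 0.0199, 0.0211, 0.0215]

print(min(errors))                          # 0.0002
print(max(errors))                          # 0.0215
print(round(sum(errors) / len(errors), 4))  # 0.0107
```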
Share and Cite

MDPI and ACS Style

Lin, M.; Li, W.; Huang, D.; Trisovic, N. Reliability Evaluation and Optimization of System with Fractional-Order Damping and Negative Stiffness Device. Fractal Fract. 2025, 9, 504. https://doi.org/10.3390/fractalfract9080504
