1. Introduction
In engineering practice, reliability analysis (RA) evaluates the failure probability of systems or structural components by comprehensively accounting for uncertainties arising from manufacturing errors, material variations, and loading conditions, in accordance with inherent failure criteria [1,2,3,4,5,6]. However, traditional time-independent reliability analysis [7,8,9] often neglects the dynamic uncertainties that arise from material degradation or random external loads, leading to significant deviations between estimated and actual reliability values. Consequently, assessment results derived from static RA methods may fail to capture reliability over the entire lifecycle. Therefore, time-dependent reliability analysis (TRA) methods have emerged and received considerable attention in recent years [10,11,12,13].
The core value of TRA lies in its capacity to account for time-dependent uncertainties such as random loads and material degradation. Consequently, it plays a crucial role in assessing the lifecycle risks and reliability of engineering systems [14]. TRA provides an in-depth understanding of the impacts arising from various sources of uncertainty by evaluating the probability that a system or its components will safely achieve the expected function within a specified time period. This offers engineers critical guidance for making informed design decisions under uncertainty, thereby enhancing structural reliability. Existing TRA methods can be broadly classified into three categories: (1) analytical methods, (2) sampling-based methods, and (3) surrogate model-based methods. One of the most representative analytical approaches is the out-crossing rate method, originally proposed by Rice [15]. This method approximates the time-dependent failure probability (TDFP) by integrating the instantaneous out-crossing rate over the time interval of interest. Andrieu-Renaud et al. [16] proposed the PHI2 method for efficiently calculating the instantaneous out-crossing rate. To improve the PHI2 method, Singh and Mourelatos [17] focused on non-monotonic linear state equations, while the modification proposed by Sudret [18] exhibits more stable computational characteristics. The Monte Carlo simulation (MCS) method [19] is a typical representative of sampling-based approaches. Savage and Son [20] revealed the growth pattern of the cumulative failure probability using a set sampling method, while Singh et al. [21] proposed an importance sampling method to solve this problem directly. However, the requirement for a substantial number of samples renders MCS computationally expensive. To improve efficiency, researchers have introduced efficient sampling techniques such as subset sampling [22] and importance sampling [23]. Nevertheless, high computational costs remain unavoidable, as achieving the required accuracy necessitates a large number of evaluations of the true performance function. This challenge is particularly pronounced when dealing with computationally expensive and complex implicit functions [24].
To enhance computational efficiency and reduce the number of function evaluations (NOF), surrogate models are frequently utilized in reliability analysis. Among the various methods available, the maximum entropy method has gained significant attention due to its capability to construct probability distributions with minimal bias under moment constraints. Pandey [25] developed a maximum entropy-based method for structural reliability estimation that employs fractional moments. Zhang and Der Kiureghian [26] proposed a structural reliability analysis approach that integrates maximum entropy with sparse grid integration. Li et al. [27] extended the maximum entropy principle to time-dependent reliability problems by establishing a moment-based distribution reconstruction. Additionally, the local maximum entropy approximation has emerged as a potent tool for uncertainty quantification. Arroyo and Ortiz [28] introduced the local maximum entropy approximation, providing smooth and non-negative shape functions, which were subsequently applied to reliability analysis. González et al. [29] employed a local maximum entropy-based surrogate model for efficient reliability assessment, demonstrating superior performance in capturing local variations in the response surface. He et al. [30] developed a reliability computation method that integrates transformation mixed integer programming with the maximum entropy principle, effectively addressing nonlinear performance functions. Deng and Pandey [31] proposed a maximum entropy method based on fractional probability-weighted moments, introduced a new dimensionless analysis approach, and used the Akaike Information Criterion to determine the optimal order of the fractional probability-weighted moments. These entropy-based methods provide a theoretical foundation for distribution reconstruction when only statistical moments are available.
In practical engineering applications, obtaining a sufficient sample size for accurate statistical characterization is often challenging due to economic costs, time constraints, or the nature of destructive testing. Consequently, conducting reliability analysis under conditions of small sample sizes or sparse data has emerged as an important area of research. To address this challenge, several methods have been developed. Chen et al. [32] proposed a bimodal interval process modeling approach for fault diagnosis in engineering systems, effectively characterizing the propagation of dynamic uncertainties without requiring extensive probabilistic information. Qiang et al. [33] developed a hybrid interval model for uncertainty analysis of imprecise or conflicting information, providing a robust framework for decision-making in scenarios with data scarcity. Wang and Matthies [34] proposed a non-probabilistic interval process model and methodology for uncertainty analysis in transient heat transfer problems, demonstrating the applicability of interval methods in time-dependent engineering problems. Bootstrap and resampling methods generate pseudo-samples from the original data, thereby enhancing the accuracy of reliability estimates [35,36,37]. To address the high level of uncertainty associated with traditional methods, Amalnerkar et al. [38] proposed a resampling-based simulation scheme that narrows the spread of reliability estimates obtained from limited sample sizes, thereby improving the accuracy of the associated statistical metrics. To extract the maximum information from limited samples, data-driven and active learning methods have been developed. Li et al. [39] proposed a data-driven polynomial chaos expansion model suitable for scenarios with sparse samples in engineering contexts, which is applied in reliability-based robust design optimization. Zhou and Lu [40] proposed an enhanced Kriging surrogate modeling technique to address the challenges posed by sparse samples in high-dimensional problems.
Although the aforementioned small-sample reliability analysis methods have made significant contributions, they still face several challenges when applied to time-dependent reliability issues. The implementation of bootstrap and resampling methods is relatively straightforward and does not require distributional assumptions; however, their accuracy is largely dependent on the original sample, which may be adversely affected when the sample size is very small. Interval-based methods provide robust bounds for uncertainty quantification without requiring complete probability information, making them particularly suitable for scenarios involving imprecise or contradictory data. However, the resulting interval estimates may be overly conservative, potentially leading to inefficiencies. Data-driven methods, such as polynomial chaos expansion and enhanced Kriging, can extract maximum information from limited samples through active learning strategies. However, their computational cost increases significantly with the dimensionality of the problem, and they still require samples to construct surrogate models. Therefore, developing a computationally efficient TRA framework capable of robustly estimating probabilistic characteristics from small samples, without relying on large amounts of training data or conservative interval bounds, remains a key challenge.
To address the above issues, this paper proposes a TRA method suitable for sparse sample conditions. The method combines probability-weighted moments (PWMs), a maximum entropy quantile function, and the fourth-moment saddle-point approximation (FMSPA). Unlike traditional central moments, PWMs are linear combinations of order statistics; they possess asymptotic unbiasedness and are less sensitive to extreme values in small samples, providing more robust moment estimates under limited-sample-size conditions. Based on the maximum entropy principle, a quantile function is constructed under PWM constraints, which yields the probability distribution with minimal bias and allows the first four central moments of the random variable to be calculated directly. Building on this, FMSPA is extended to the field of TRA for the efficient evaluation of the cumulative failure probability at each time step. The method does not require the construction of complex surrogate models or rely on a large number of training samples, enabling a significant improvement in computational efficiency while maintaining accuracy. The flowchart of the process is shown in Figure 1.
The remainder of this paper is organized as follows: Section 2 introduces a PWM-based quantile function method for uncertainty assessment under small sample conditions; Section 3 describes the FMSPA method; Section 4 extends the proposed method to the framework of TRA; Section 5 validates the effectiveness of the proposed method through three examples; and Section 6 summarizes the research findings and outlines potential directions for future work.
2. Probability-Weighted Moments and Quantile Function
2.1. Probability-Weighted Moments
The sparse-sample problem for a random variable can be addressed by PWMs. PWMs are linear functions of order statistics and can be estimated through a linear combination of the ordered dataset. Let p = F(x) = P(X ≤ x) denote the CDF of the random variable X, and let x = x(p) denote its inverse, the quantile function. Greenwood et al. [41] formally defined the PWMs of a random variable as follows:

Mi,j,k = E[X^i F(X)^j (1 − F(X))^k] = ∫₀¹ [x(p)]^i p^j (1 − p)^k dp (1)
where i, j, and k are real numbers. The following are the two most commonly used forms of PWMs:

αk = M1,0,k = ∫₀¹ x(p)(1 − p)^k dp (2)

and

βk = M1,k,0 = ∫₀¹ x(p) p^k dp (3)
For a sample of n order statistics x(1) ≤ x(2) ≤ … ≤ x(n), ak and bk are the unbiased estimates of αk and βk, respectively:

ak = (1/n) Σj=1..n [(n − j)(n − j − 1)⋯(n − j − k + 1)/((n − 1)(n − 2)⋯(n − k))] x(j) (4)

and

bk = (1/n) Σj=1..n [(j − 1)(j − 2)⋯(j − k)/((n − 1)(n − 2)⋯(n − k))] x(j) (5)
Likewise, assume that a sample contains n observations {x1, x2, …, xn}. The sample central moment can be defined as:

mk = (1/n) Σi=1..n (xi − x̄)^k (6)

where x̄ = (1/n) Σi=1..n xi is the sample mean, and k is the order of the central moment.
By comparing Equations (5) and (6), it can be seen that the central moments are inferred from the sample data. They are highly sensitive to the range of sample values, especially under small-sample conditions, where higher-order central-moment estimates from random sample sets are prone to bias. PWMs, on the other hand, extract sample information through linear computations. Their advantage lies in their insensitivity to boundary values of the samples, allowing higher-order PWMs to be derived more accurately. Under small-sample conditions, the probability distribution obtained with higher-order PWM constraints is more objective and reasonable.
However, the proposed framework does not compute the fourth moment directly from the sample data. Instead, it calculates PWMs, which are linear functions of order statistics and possess superior asymptotic unbiasedness and robustness even under small sample conditions. The higher-order central moments used in FMSPA are subsequently derived analytically from the maximum entropy quantile function constrained by these robust PWMs.
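To make the contrast concrete, the following Python sketch (ours, for illustration only; the lognormal test sample and random seed are assumptions, not data from this work) implements the unbiased estimator bk of Equation (5) alongside the sample central moments of Equation (6):

```python
# A minimal sketch of the unbiased PWM estimator b_k (Eq. 5) versus the
# sample central moments (Eq. 6). Sample and seed are illustrative.
import numpy as np

def pwm_b(x, k):
    """Unbiased estimate b_k of beta_k from a sample, Eq. (5)."""
    xs = np.sort(x)                      # order statistics x_(1) <= ... <= x_(n)
    n = len(xs)
    j = np.arange(1, n + 1)
    # weight (j-1)(j-2)...(j-k) / [(n-1)(n-2)...(n-k)], equal to 1 for k = 0
    w = np.ones(n)
    for m in range(1, k + 1):
        w *= (j - m) / (n - m)
    return np.mean(w * xs)

def central_moment(x, k):
    """Sample central moment of order k, Eq. (6)."""
    return np.mean((x - np.mean(x)) ** k)

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.5, size=15)   # a small, skewed sample
print([pwm_b(x, k) for k in range(4)])            # b_0 ... b_3 (linear in the data)
print([central_moment(x, k) for k in (2, 3, 4)])  # higher orders amplify extremes
```

Because each bk is a linear combination of the ordered data, no sample value is raised to a power, which is what keeps the higher-order PWM estimates stable for small n.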
2.2. Moments of the Quantile Function
PWMs are the expected values of order statistics and can also be understood as the moments of the quantile function of a non-negative random variable, the quantile function being the inverse of the distribution function. For non-negative random variables, β0 is commonly used as the normalizing (zeroth-order) moment of the quantile function, which is the typical situation in calculations. Following the definition of traditional statistical moments, the k-th moment (k ≥ 1) of the quantile function is defined as:

μk = ∫₀¹ p^k [x(p)/β0] dp = βk/β0 (7)

where

β0 = ∫₀¹ x(p) dp (8)

The quantile function is a continuous and non-negative function, as it holds that

x(p) ≥ 0 (9)

for 0 ≤ p ≤ 1.
According to Equations (7) and (9), βk/β0 can be considered as the k-th moment of the quantile function x(p). In practical calculations, if the measurement data sample is normalized, then β0 = α0 = 1. For such a sample, βk represents the k-th moment of the quantile function.
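As a quick numerical illustration of this normalization (a sketch under our own conventions, not code from this work), dividing a sample by its mean forces b0 = 1, after which each bk can be read directly as the k-th moment βk of the quantile function:

```python
# Illustrative normalization check: after dividing a sample by its mean so
# that b0 = 1, the estimates b_k can be read directly as the k-th moments
# beta_k of the quantile function x(p). Sample and seed are illustrative.
import numpy as np

def pwm_b(x, k):
    xs, n = np.sort(x), len(x)
    j = np.arange(1, n + 1)
    w = np.ones(n)
    for m in range(1, k + 1):
        w *= (j - m) / (n - m)         # weights of the unbiased estimator, Eq. (5)
    return np.mean(w * xs)

rng = np.random.default_rng(0)
x = rng.lognormal(0.0, 0.5, size=15)
x_norm = x / x.mean()                  # b0 = 1 by construction
print([pwm_b(x_norm, k) for k in range(4)])
```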
2.3. Entropy-Based Quantile Function
The maximum entropy method provides a clear process for constructing probability distributions: under moment constraints, appropriate Lagrange multipliers are sought that maximize the entropy function. The maximum entropy distribution derived from numerical computation enhances the reliability of structural parameter estimates. As a non-parametric technique, it requires no assumptions about the distributional form of the random variables, relying solely on statistical moments as constraints. As moment estimators, PWMs possess convergence and asymptotic unbiasedness, making them well suited to solving engineering problems.
The entropy of the quantile function can be expressed as:

H[x(p)] = −∫₀¹ x(p) ln x(p) dp (10)

The constraint conditions are presented in the form of PWMs:

∫₀¹ x(p) p^k dp = bk, k = 0, 1, …, N (11)

where bk represents the sample estimate of the population PWM, and N indicates the highest order of PWM that is analyzed.
It is important to note that x(p) in Equation (10) is not normalized; instead, the normalization condition is imposed as an external constraint in Equation (11) for k = 0. To incorporate these constraints, the entropy function is modified as follows:

L = −∫₀¹ x(p) ln x(p) dp − Σk=0..N λk [∫₀¹ x(p) p^k dp − bk] (12)

where λk denotes an unknown Lagrange multiplier.

Setting the variational derivative of Equation (12) to zero yields the expression for the quantile function determined by the principle of maximum entropy under the PWM constraint conditions:

x(p) = exp(−λ0 − Σk=1..N λk p^k) (13)

in which the constant arising from the stationarity condition has been absorbed into λ0. The Lagrange multipliers can be determined from the constraint condition of Equation (11):

∫₀¹ p^k exp(−λ0 − Σj=1..N λj p^j) dp = bk, k = 0, 1, …, N (14)
By transforming Equation (14) and letting the residual rk be:

rk = bk − ∫₀¹ p^k x(p) dp (15)

the residual represents the quality of the Lagrange multiplier estimate: the smaller the residual, the more accurate the estimate. To obtain the values of the Lagrange multipliers λk, the residuals must be minimized. Therefore, let:

min Σk=0..N rk² ≤ ε (16)

where ε is a small positive tolerance that approaches zero.
Equation (16) is solved in this paper using a genetic algorithm. The uncertainty assessment of random variables can then be performed through the quantile function; under small sample conditions, this provides better uncertainty evaluation results than traditional sample moment estimation. Since the quantile function is the inverse of the distribution function, inverting it recovers the CDF, F = x−1, from which the probability density function (PDF) of the random variable follows by differentiation. Additionally, the first four central moments of the random variable can be determined directly from the quantile function. Once these first four moments have been obtained, the reliability can be calculated using the FMSPA method.
When engineering practice involves limited sample data, the PWM-based quantile function method offers an effective solution. As demonstrated in the subsequent numerical examples, this method can accurately estimate the statistical moments of random variables using only a small number of samples, thereby validating its practical applicability in time-dependent reliability analysis with sparse data.
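A minimal sketch of this workflow is given below, assuming the exponential-polynomial form of Equation (13); SciPy's differential_evolution is used as a stand-in for the genetic algorithm, and the PWM values, search bounds, and tolerance are illustrative rather than settings from this work:

```python
# A hedged sketch of Section 2.3 assuming the form of Eq. (13),
# x(p) = exp(-(lam0 + lam1*p + ... + lamN*p^N)). differential_evolution
# stands in for the paper's genetic algorithm; b, bounds, and tol are
# illustrative values, not the authors' settings.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import differential_evolution

def quantile(p, lam):
    return np.exp(-np.polyval(lam[::-1], p))      # maximum entropy x(p), Eq. (13)

def residuals(lam, b):
    # r_k = b_k - integral of p^k x(p) over [0, 1], one per constraint, Eq. (15)
    return [b[k] - quad(lambda p: p**k * quantile(p, lam), 0.0, 1.0)[0]
            for k in range(len(b))]

def fit_multipliers(b):
    # minimize the sum of squared residuals over the multipliers, Eq. (16)
    obj = lambda lam: sum(r**2 for r in residuals(lam, b))
    bounds = [(-10.0, 10.0)] * len(b)
    return differential_evolution(obj, bounds, seed=0, tol=1e-8).x

def first_four_moments(lam):
    # mean and central moments obtained directly from the quantile function
    mean = quad(lambda p: quantile(p, lam), 0.0, 1.0)[0]
    c = [quad(lambda p: (quantile(p, lam) - mean)**k, 0.0, 1.0)[0] for k in (2, 3, 4)]
    return [mean] + c

b = [1.0, 0.55, 0.39, 0.30]            # PWMs b0..b3 of a normalized sample
lam = fit_multipliers(b)
print(first_four_moments(lam))
```

In this sketch, the first four central moments are obtained by direct quadrature of the fitted quantile function; these are the quantities passed on to the FMSPA step described next.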
3. Saddle-Point Approximation Method
Over recent decades, the saddle-point approximation (SPA) method has gained widespread adoption due to its exceptional accuracy [42,43,44]. The method was originally developed by Daniels [45]. In this paper, the FMSPA method is employed to perform reliability analysis.
The moment generating function (MGF) of Y, denoted as MY(s), is defined as follows:

MY(s) = E[exp(sY)] = ∫ exp(sy) fY(y) dy

where E[·] denotes the expectation and fY(y) is the PDF of Y. The logarithm of the MGF is the cumulant generating function (CGF), given by:

KY(s) = ln MY(s)
The CGF is computed using a truncated series expansion, with its approximate expression given as follows [46]:

KY(s) ≈ κ1 s + κ2 s²/2 + κ3 s³/6 + κ4 s⁴/24

where κi is the i-th cumulant of Y. The relationship between the first four central moments and the cumulants is:

κ1 = m1, κ2 = c2, κ3 = c3, κ4 = c4 − 3(c2)²

where mi and ci represent the first four origin moments and first four central moments, respectively.
The saddle-point approximation to the PDF can be expressed as:

fY(y) ≈ [2π K″Y(s̃)]^(−1/2) exp[KY(s̃) − s̃y]

where K″Y(·) denotes the second-order derivative of the CGF of Y, and s̃ represents the saddle-point, obtained as the solution of:

K′Y(s̃) = y

where K′Y(·) represents the first-order derivative of the CGF of Y. Since the truncated CGF has a polynomial structure, K′Y(s) = y is a cubic equation in s, and its real roots can be derived in closed form.
Based on the SPA for fY(y), the CDF of Y can be approximated as:

FY(y) ≈ Φ(w) + φ(w)(1/w − 1/v) (29)

where Φ(·) and φ(·) represent the CDF and PDF of the standard normal distribution, respectively, and the values of w and v are obtained from Equations (30) and (31):

w = sgn(s̃) √(2[s̃y − KY(s̃)]) (30)

v = s̃ √(K″Y(s̃)) (31)

where sgn(s̃) = 0, −1, or +1, depending on whether s̃ is zero, negative, or positive, respectively.
The FMSPA method provides an effective approach for approximating the CDF of a random variable using only its first four moments. Since reliability evaluation requires only FY(0), Equations (29)–(31) are evaluated at y = 0. In the subsequent sections, this method is applied to TRA, where FY(0) at each time instant serves as the basis for calculating the cumulative failure probability.
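The following sketch reconstructs this evaluation from Equations (29)–(31) (variable and function names are ours); it recovers the cumulants from the four moments, solves the cubic saddle-point equation, and checks the result against an exact Gaussian case:

```python
# Our reconstruction of the FMSPA evaluation of F_Y(y) via Eqs. (29)-(31);
# names are illustrative. Assumes y is not equal to the mean (w = 0 there,
# where Eq. (29) is singular).
import numpy as np
from scipy.stats import norm

def fmspa_cdf(y, m1, c2, c3, c4):
    """Approximate F_Y(y) from the mean m1 and central moments c2, c3, c4."""
    k1, k2, k3, k4 = m1, c2, c3, c4 - 3.0 * c2**2   # cumulants from moments
    K = lambda s: k1*s + k2*s**2/2 + k3*s**3/6 + k4*s**4/24
    Kpp = lambda s: k2 + k3*s + k4*s**2/2
    # saddle point: K'(s) = y is cubic in s; take the real root nearest zero
    roots = np.roots([k4/6.0, k3/2.0, k2, k1 - y])
    s = min((r.real for r in roots if abs(r.imag) < 1e-9), key=abs)
    w = np.sign(s) * np.sqrt(2.0 * (s*y - K(s)))    # Eq. (30)
    v = s * np.sqrt(Kpp(s))                         # Eq. (31)
    return norm.cdf(w) + norm.pdf(w) * (1.0/w - 1.0/v)  # Eq. (29)

# sanity check against an exact case: Y ~ N(3, 1) gives F_Y(0) = Phi(-3)
print(fmspa_cdf(0.0, m1=3.0, c2=1.0, c3=0.0, c4=3.0))   # ~1.35e-3
print(norm.cdf(-3.0))
```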
4. Time-Dependent Reliability Analysis
In practical engineering, structural performance degrades over time due to material deterioration, load fluctuations, or environmental corrosion. Consequently, the reliability analysis must transition from a static assessment to a time-dependent framework. This section details how the proposed FMSPA method is integrated with the PHI2 out-crossing rate method to evaluate the cumulative failure probability Pf,c (0, te) over the service life [0, te].
4.1. Cumulative Failure Probability Formulation
The time-dependent reliability analysis aims to calculate the probability that the limit state function (LSF) g(X, Y(t), t) falls into the failure domain (g ≤ 0) at any instant within the time interval [0, te]. Based on the out-crossing rate theory proposed by Rice [15], if the failure probability is small, the cumulative failure probability can be approximated by the upper bound:

Pf,c(0, te) ≤ Pf(0) + ∫₀ᵗᵉ v⁺(t) dt (32)

where Pf(0) denotes the initial instantaneous failure probability, and v⁺(t) denotes the instantaneous out-crossing rate at time t.
For numerical implementation, the continuous time interval [0, te] is discretized into k equal time steps of size Δt, i.e., ti = iΔt, i = 0, 1, …, k. Consequently, the integral in Equation (32) can be rewritten as a discrete summation:

Pf,c(0, te) ≈ Pf(0) + Σi=0..k−1 ΔPi (33)

where ΔPi = v⁺(ti)Δt represents the probability that the structure crosses from the safe domain to the failure domain within the time interval [ti, ti+1].
4.2. The PHI2 Method for Out-Crossing Estimation
To efficiently compute ΔPi, the PHI2 method [16] is adopted. This method conceptualizes the out-crossing event within [ti, ti+1] as a parallel system failure problem, where the system is safe at time ti and fails at time ti+1. Mathematically, this is expressed as:

ΔPi = P({g(X, Y(ti), ti) > 0} ∩ {g(X, Y(ti+1), ti+1) ≤ 0}) (34)
By mapping the instantaneous performance functions Zi = g(·, ti) and Zi+1 = g(·, ti+1) into the standard normal space, the joint probability in Equation (34) can be calculated using the bivariate normal cumulative distribution function Φ2:

ΔPi ≈ Φ2(β(ti), −β(ti+1); −ρ(ti, ti+1)) (35)

where β(t) is the time-dependent reliability index at time t, defined as β(t) = −Φ−1(Pf(t)), and ρ(ti, ti+1) is the correlation coefficient between the LSFs at the adjacent time steps ti and ti+1; the sign reversal on ρ in Equation (35) accounts for the opposite inequality directions of the safety event at ti and the failure event at ti+1.
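A compact sketch of Equation (35) using SciPy's bivariate normal CDF is shown below; the sign flip on ρ encodes the parallel-system event of being safe at ti and failed at ti+1, and the numerical values are illustrative:

```python
# A sketch of the PHI2 increment of Eq. (35). The event "safe at t_i, failed
# at t_{i+1}" is a bivariate normal probability in standard normal space;
# flipping the sign of the correlation rho between the two limit state
# responses accounts for the opposite inequality directions.
import numpy as np
from scipy.stats import multivariate_normal

def phi2_increment(beta_i, beta_ip1, rho):
    """P(Z_i > 0 and Z_{i+1} <= 0) for reliability indices beta_i, beta_ip1."""
    cov = np.array([[1.0, -rho], [-rho, 1.0]])
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([beta_i, -beta_ip1])

print(phi2_increment(3.0, 2.9, 0.95))   # a small out-crossing probability
```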
4.3. Integration of FMSPA and PHI2
The core contribution of this paper is the accurate estimation of β(t) and the statistical characteristics of the LSF under small sample conditions using FMSPA. Unlike traditional approaches that rely on the first-order reliability method (FORM), FMSPA reconstructs the probability distribution of the LSF using its first four moments derived from the PWM-based quantile function.
The specific computational procedure for the proposed method is as follows:
Step 1: Time discretization
Discretize the total time interval [0, te] into k intervals with nodes ti = iΔt, i = 0, 1, …, k.
Step 2: Moment estimation via PWM
At each time node ti, generate a small set of samples for random variables and stochastic processes. Evaluate the LSF samples. Use the PWM-based quantile function method to estimate the first four central moments of the instantaneous performance function Z(ti).
Step 3: Reliability index calculation via FMSPA
Apply the FMSPA method to approximate the CDF value of the performance function at zero, i.e., FZ(ti)(0). The instantaneous reliability index is then obtained by:

β(ti) = −Φ−1(FZ(ti)(0)) (36)

This step is repeated for all time nodes to obtain the trajectory of β(t).
Step 4: Correlation coefficient calculation
Calculate the Pearson correlation coefficient ρ(ti, ti+1) between the limit state responses at adjacent time steps using the sample evaluations from Step 2:

ρ(ti, ti+1) = Cov[Z(ti), Z(ti+1)]/(σZ(ti) σZ(ti+1)) (37)

where σZ(ti) and σZ(ti+1) are the standard deviations of the instantaneous responses.
Step 5: Cumulative probability aggregation
Substitute the obtained β(ti), β(ti+1), and ρ(ti, ti+1) into Equation (35) to compute the probability increment ΔPi. Finally, aggregate these increments using Equation (33) to determine the cumulative failure probability Pf,c(0, te).
By combining FMSPA with PHI2, the proposed framework effectively handles the nonlinearity of the LSF through higher-order moments while efficiently addressing the time-dependent correlation using the out-crossing rate formulation.
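The five steps can be assembled into a single routine, sketched below under stated assumptions: lsf and sample_inputs are hypothetical problem-specific callables, four_moments stands for the PWM/maximum entropy estimator of Section 2, and fmspa_cdf and phi2_increment refer to the sketches given earlier:

```python
# An end-to-end sketch of Steps 1-5. Assumptions: `lsf(inputs, t)` returns an
# array of limit state values, `sample_inputs(t, n)` draws n joint samples of
# the random variables and process at time t (adjacent time nodes must share
# the same underlying random draws for the Step 4 correlation to be
# meaningful), and `four_moments(z)` is the PWM/maximum entropy estimator of
# Section 2. `fmspa_cdf` and `phi2_increment` are the sketches given earlier.
import numpy as np
from scipy.stats import norm

def time_dependent_pf(lsf, sample_inputs, four_moments, t_end, k, n_small):
    t = np.linspace(0.0, t_end, k + 1)                 # Step 1: discretization
    Z = np.stack([lsf(sample_inputs(ti, n_small), ti) for ti in t])
    beta = []
    for zi in Z:
        m1, c2, c3, c4 = four_moments(zi)              # Step 2: four moments
        pf_i = fmspa_cdf(0.0, m1, c2, c3, c4)          # Step 3: F_Z(0), Eq. (36)
        beta.append(-norm.ppf(pf_i))
    pf_c = norm.cdf(-beta[0])                          # initial Pf(0)
    for i in range(k):
        rho = np.corrcoef(Z[i], Z[i + 1])[0, 1]        # Step 4: Eq. (37)
        pf_c += phi2_increment(beta[i], beta[i + 1], rho)  # Step 5: Eqs. (35), (33)
    return pf_c
```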
5. Example Analysis
To validate the effectiveness of the proposed FMSPA method, it is compared with three representative TRA methods: MCS, PHI2, and Kriging. Three test cases are employed for verification, including two numerical examples and one engineering application. Before presenting the examples, a summary of the key parameter settings for each method is provided in Table 1. These parameters were selected based on preliminary research and remain consistent across all examples unless otherwise specified. The Kriging surrogate model employs a Gaussian correlation function, with an initial correlation parameter of 0.1 in all dimensions. The active learning process uses the U-function as the learning criterion, and convergence is declared when Umin > 2, indicating that all candidate points are sufficiently distant from the limit state surface and exhibit high confidence. The FMSPA method employs probability-weighted moments and uses a genetic algorithm to solve for the Lagrange multipliers. The PHI2 method employs a 4-point Gauss–Legendre quadrature for the numerical integration of the out-crossing rate. The NOF and the relative error are used as the efficiency and accuracy indicators, respectively, with the TDFP calculated by the MCS method serving as the reference benchmark. To ensure statistical precision, a sufficiently large sample size is used for MCS, as detailed in each subsection. The sample sizes for the other methods are also provided to facilitate fair comparison.
In the following examples, the NOF serves as a key indicator of computational efficiency. For the proposed FMSPA method, the NOF is determined by the product of the sample size estimated using PWM and the number of time discretization nodes. In contrast, for the PHI2 method, the NOF depends on the number of iterations required for convergence at each time step, with the number of iterations varying according to the nonlinearity of the limit state function. For MCS, the NOF is determined by the convergence criteria of the Monte Carlo method. For the Kriging method, the NOF represents the number of actual function evaluations required to construct and refine the surrogate model through active learning. The significant differences in NOF among various methods reflect the adaptability of each approach to specific data conditions and computational objectives.
5.1. A Mathematical Example
A classic time-dependent numerical example, which contains two random variables and one stationary Gaussian stochastic process variable, is used as the first example [47]. The variables [X1, X2] follow independent normal distributions, Y(t) is a stationary Gaussian random process, and the time variable t is defined over the interval [0, 1]. The specific characteristics of each variable are detailed in Table 2.
In the analysis process, the original time interval [0, 1] is first discretized into 65 subintervals, resulting in a total of 66 time nodes. To clearly illustrate the accuracy differences of each method at varying failure probabilities, this study selects five different time intervals for comparative analysis. The specific results are shown in Table 3, which presents the computational results of the MCS, PHI2, FMSPA, and Kriging methods for each time interval, together with the NOF and the computational time required by each method.

The sample sizes for each method are as follows: for the MCS method, 10⁵ samples were generated at each of the 66 time nodes, resulting in a total of 6.6 × 10⁶ function evaluations. In contrast, the proposed FMSPA utilized 4242 samples at each time node, leading to a total of 2.8 × 10⁵ function evaluations. For the PHI2 method, the number of function evaluations depends on the number of FORM iterations required for convergence at each time step. The Kriging method initially employs 30 training samples, with additional samples added through active learning until convergence is achieved, resulting in a total of 130 function evaluations across all time nodes.
Based on the reliability results presented in Table 3 and Table 4, it is evident that most of the tested TRA methods exhibit significant deviations in estimating the failure probability within the time interval [0, 0.2]. However, in other intervals such as [0, 0.6], [0, 0.8], and [0, 1.0], all methods are able to assess the failure probability with relatively high accuracy, with the Kriging method standing out particularly. This observation aligns with the results shown in Figure 2.

The CPU times are also presented in Table 3. Although the Kriging method demonstrates high accuracy, its computational time is significantly higher than that of the other methods. In contrast, the MCS method, despite requiring 6.6 × 10⁶ performance function evaluations, completes the computation within 4.25 s due to its straightforward computational procedure. The NOF for the proposed FMSPA method depends on the preset sample size at each time node, whereas for the PHI2 method, an iterative optimization algorithm, the NOF is determined by the convergence speed. In this example, the moderate nonlinearity of the limit state function results in comparable computational costs for both methods. Notably, the proposed FMSPA method exhibits superior computational efficiency, completing the entire calculation in only 5.58 s, approximately a 91% reduction in computational time compared to the Kriging method. This clearly demonstrates that the proposed uncertainty assessment function possesses exceptional computational efficiency while maintaining high calculation accuracy.
Figure 3 illustrates the time-dependent reliability index curves computed by the different methods for the example in Section 5.1. It can be clearly observed from Figure 3 that the reliability index displays a monotonic decreasing trend as time advances, consistent with the monotonic increasing pattern of failure probability depicted in Figure 2. Upon comparing the computational results from the various methods, it is evident that the reliability index curves obtained by the FMSPA and Kriging methods are in close agreement with the MCS reference results, whereas the PHI2 method exhibits relatively pronounced deviations during the initial time period, with its prediction accuracy progressively improving over time.
5.2. A Cantilever Tube Structure
The cantilever tube structure used in this case study is shown in Figure 4. The structure is subjected to multiple loads: two inclined lateral forces, F1(t) and F2, are applied at the middle and end of the tube, at angles θ1 and θ2 relative to the transverse vertical plane, respectively. Additionally, a torque T(t) is applied at the tube's end, and an axial load P is applied along the axial direction of the tube; P is a small-sample random variable with a sample size of n = 10. The geometric dimensions of the structure are set as follows: L1 = 120 mm, L2 = 60 mm, θ1 = 5°, and θ2 = 10°. Considering that the material of the cantilever tube experiences corrosion degradation during its service life of [0, 10] years, the variation in its bending strength S(t) over time is simulated using the degradation model proposed in [48], in which S0 represents the yield strength at the initial time and is typically treated as a random variable in practical applications.
The safety performance function is defined based on the yield strength S(t) of the cantilever tube at the clamped end's circumference and the maximum von Mises stress σmax, i.e., the tube fails once σmax exceeds S(t):

g(t) = S(t) − σmax(t)

In this case, the von Mises stress σmax can be expressed as:

σmax = √(σx² + 3τzx²)
where σx and τzx represent the superimposed normal stress and shear stress in the x-axis direction, respectively; A is the area of the end section of the cantilever tube; F2 and P are random variables; d and h represent the outer diameter and wall thickness of the end section, respectively, and all four of these variables are random; F1(t) and T(t) are stationary random processes. The detailed distribution characteristics of these parameters are summarized in Table 5. The ten samples of the load P (N) are: 1250, 1000, 1050, 1150, 950, 980, 1020, 1100, 1200, 900.
Before evaluating the time-dependent reliability, the stochastic processes are first discretized by dividing the time interval [0, 10] into 81 nodes. To thoroughly validate the performance of each method, TRA is conducted over five different time intervals, with the results presented in Table 6.

The sample sizes for each method in this study are as follows: for the MCS method, 10⁶ samples were collected at each of the 81 time nodes, resulting in a total of 8.1 × 10⁷ function evaluations. In contrast, the FMSPA considers only 10 samples for the axial load P, which is a reasonable sample size in this context; a total of 170 samples were utilized at each time node, leading to 13,834 function evaluations. The number of function evaluations for the PHI2 method is determined by the number of FORM iterations required for convergence at each time step. The Kriging method employs 30 initial training samples and adds additional samples through active learning until convergence is achieved, resulting in a total of 292 function evaluations across all time nodes.
The failure probability calculation results for the five time intervals are summarized in Table 6, while the comparison of relative errors for each method is presented in Table 7. The NOF indicates that the PHI2 method requires an exceptionally high number of evaluations, attributable to the strong nonlinearity and non-monotonicity introduced by the trigonometric terms in the equations. In contrast, the FMSPA method requires only a small number of samples to estimate the PWMs, achieving accurate reliability assessments with approximately 1.3 × 10⁴ evaluations. This demonstrates its superior efficiency for highly nonlinear problems. The computational results indicate that most methods exhibit high accuracy across the majority of time intervals. However, in the time intervals [0, 2] and [0, 4], the accuracy of the failure probability obtained using the FMSPA method is relatively lower. Nevertheless, as illustrated in Figure 5, FMSPA achieves the most accurate results over the entire time interval [0, 10], demonstrating that the method can deliver accurate results across the complete time domain even when its accuracy is lower for small-probability scenarios. As shown by the CPU times in Table 6, the proposed FMSPA method achieves a significant reduction in computational cost relative to the other methods.
Figure 6 illustrates the time-dependent reliability index curves computed using the different methods for the example in Section 5.2. A comparison of the results reveals that the reliability index curves obtained by the FMSPA and PHI2 methods exhibit excellent agreement with the MCS benchmark results, while the Kriging method shows noticeable deviation during the early time period; however, its prediction accuracy gradually improves as time progresses. The close agreement among all methods further validates the accuracy and effectiveness of the proposed FMSPA approach for TRA.
5.3. A Wing Structure
To validate the application of the proposed method in engineering practice, the reliability problem of a wing structure [49] is selected as the research subject. The wing structure consists of three ribs and a skin panel, as shown in Figure 7. The left side of the structure is subjected to a load F, while three directional constraints are applied to the right side. The detailed variable characteristics are shown in Table 8.
The elastic modulus of the ribs is defined as E1, with a Poisson's ratio of 0.35 and a density of 4.0 × 10³ kg/m³. Similarly, the elastic modulus of the skin is set as E2, with a Poisson's ratio of 0.35 and a density of 1.8 × 10³ kg/m³. The rib thickness and skin thickness are represented by th and s, respectively. The deformation of the wing structure during the design reference period T = 10 years must satisfy the following requirement: under load, the vertical deformation of the wing surface should not exceed the time-dependent allowable value D(t) = D0 [1 + ln(1 − 0.0002t)], where D0 represents the initial allowable displacement. The time-dependent load is denoted as F(t); it is the maximum variable load and is a random variable with limited sample data. The ten samples of the load F(t) (N) are: 2850, 3200, 2900, 3100, 2750, 3150, 2950, 3300, 2800, 3000. Therefore, the LSF can be defined as:

g(t) = D(t) − Dmax(E1, E2, th, s, F(t))

where Dmax(·) represents the maximum displacement of the structure, which can be calculated using the finite element method.
Prior to evaluating the time-dependent reliability of the wing structure, the time interval [0, 10] is partitioned into 100 sub-intervals, resulting in 101 time nodes. To comprehensively evaluate the performance of the aforementioned RA methods, this study conducted TRA over five distinct time intervals. The detailed results are presented in Table 9.

The sample sizes for each method are as follows: for the MCS method, 10⁶ samples were generated at each of the 101 time nodes, resulting in a total of approximately 10⁸ function evaluations. In contrast, the proposed FMSPA utilized 4950 samples at each time node, leading to a total of 5 × 10⁵ function evaluations. For the PHI2 method, the number of function evaluations is contingent upon the number of FORM iterations required for convergence at each time step. The Kriging method employs 30 initial training samples and adds additional samples through active learning until convergence is achieved, resulting in a total of 8624 function evaluations across all time nodes.
All reliability results are presented in detail in Table 9. The MCS approach requires the computation of true responses for 10⁸ random samples across all time nodes to achieve TRA. As demonstrated in Table 10, over the time intervals [0, 6] and [0, 8], the relative error in failure probability computed by FMSPA is significantly lower than that of the other TRA methods. The various methods yield accurate TRA results during the other time periods, with the error remaining within a similar range. As shown in Table 9, these methods exhibit significant differences in computational resource requirements.

The NOFs required by the MCS and FMSPA methods are significantly higher than those of the PHI2 and Kriging methods. Notably, the PHI2 method exhibits an exceptionally low number of function evaluations in this example. This can be attributed to the limit state function being derived from a polynomial response surface, which is inherently smooth and convex, thereby facilitating rapid convergence of the optimization algorithm. Additionally, the candidate pool of the proposed method is considerably smaller than that of MCS-based methods. CPU times are also presented in Table 9. Although FMSPA necessitates a larger number of function evaluations, it requires the least computational time because it employs the first four moments to characterize the probability distribution of the LSF, rather than calculating the failure probability from a substantial number of samples. While Kriging requires a lower NOF than the other methods, the extended iterative time for model training and active learning makes its computational time significantly greater than that of the other methods.
Figure 8 presents the time-varying failure probability curves obtained using the four different methods, and Figure 9 presents the time-dependent reliability index curves for the example in Section 5.3. Consistent with the results shown in Figure 8, the curves derived from all methods exhibit a high degree of agreement. A comparison reveals that the curves produced by FMSPA, PHI2, and Kriging align almost perfectly with the MCS benchmark, demonstrating that each method accurately captures the temporal variation in the reliability index for this wing structure case and thereby validating the effectiveness and reliability of FMSPA in engineering applications.
5.4. Sensitivity Analysis of Sample Size
To evaluate the robustness of the proposed FMSPA method under varying sample sizes, a sensitivity analysis was conducted using the wing structure example. The sample size n ranged from 10 to 500. For each sample size, the analysis was repeated 10 times to assess the statistical stability of the results. Key performance metrics were recorded, including the mean cumulative failure probability (Mean Pf,c), standard deviation (Std), coefficient of variation (CV), and relative error compared to the MCS reference.
The results of the sensitivity analysis are summarized in Table 11. The mean cumulative failure probability represents the average value obtained from the ten repeated calculations and serves as the estimate of the FMSPA method. The standard deviation measures the dispersion of results across these repetitions, indicating the consistency of the method. The CV, calculated as the ratio of the standard deviation to the mean, quantifies the stability of the method; lower CV values indicate more stable and repeatable results. The relative error evaluates the accuracy of FMSPA in comparison to the MCS reference solution.
As shown in Table 11, the relative error generally decreases with increasing sample size. When n = 10, the relative error is approximately 4.06%, which is acceptable for preliminary engineering assessments. As the sample size increases to n = 50, the relative error reduces to 2.25%. Further increasing the sample size to n = 500 results in a relative error of approximately 1.87%, demonstrating consistency with the MCS reference. The CV likewise exhibits a decreasing trend with increasing sample size, dropping from 5.95% at n = 10 to 0.66% at n = 500.
It is important to note that the coefficient of variation and the relative error do not consistently decrease monotonically with increasing sample size. Instead, local fluctuations were observed at certain sample sizes. For instance, the CV increased from 1.50% at a sample size of n = 50 to 1.97% at n = 100, while the relative error rose from 1.83% at n = 100 to 2.97% at n = 200. These non-monotonic variations can be attributed to several factors: (1) The estimates of the mean and standard deviation are influenced by inherent sampling randomness due to only 10 repetitions being performed for each sample size, which may lead to localized fluctuations in the coefficient of variation and error metrics. (2) When solving for Lagrange multipliers, the random initialization and search process of the genetic algorithm may result in different runs converging to slightly varying local optima, which in turn affects the stability of the moment estimates. (3) The PWM-based moment estimation involves multiple computational steps, and even minor variations in lower-order PWM estimates may occasionally be amplified in the calculation of higher-order moments. Despite these local fluctuations, the overall trend clearly indicates that larger sample sizes lead to more accurate and stable reliability estimates, thereby validating the convergence of the proposed method.
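For reference, the statistics reported in Table 11 can be reproduced from repeated runs as follows (the values below are placeholders, not data from this study):

```python
# Illustrative post-processing for the sensitivity study: given repeated FMSPA
# estimates at one sample size, compute the mean, standard deviation, CV, and
# relative error against an MCS reference. `pf_runs` and `pf_mcs` are
# placeholder values.
import numpy as np

pf_runs = np.array([3.9e-3, 4.2e-3, 4.0e-3, 4.1e-3, 3.8e-3,
                    4.3e-3, 4.0e-3, 4.1e-3, 3.9e-3, 4.2e-3])  # 10 repetitions
pf_mcs = 4.05e-3                                              # reference value

mean_pf = pf_runs.mean()
std_pf = pf_runs.std(ddof=1)              # sample standard deviation
cv = std_pf / mean_pf                     # coefficient of variation
rel_err = abs(mean_pf - pf_mcs) / pf_mcs  # relative error vs. MCS
print(f"mean={mean_pf:.3e}, CV={cv:.2%}, error={rel_err:.2%}")
```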
Figure 10 presents the TDFP curves for different sample sizes, compared with the MCS method. All curves exhibit the same trend, demonstrating a monotonic increase in failure probability over time. The results indicate that the FMSPA method effectively captures the overall trend of failure probability evolution, even with relatively small sample sizes. However, noticeable discrepancies are observed at certain time points when compared with the MCS reference. As the sample size exceeds 100, the alignment of the FMSPA curves with the MCS reference gradually improves across the entire time range. For sample sizes of n ≥ 200, the difference between FMSPA and MCS diminishes significantly, confirming the convergence behavior of the proposed method.
The sensitivity analysis indicates that the proposed FMSPA method demonstrates robust performance. Although larger sample sizes yield more accurate and stable results, the method can still provide reasonable estimates when sample availability is limited. In practical engineering applications, where data collection is costly or time-consuming, the method therefore offers a favorable balance between computational accuracy and data requirements even with limited sample sizes.