Article

Bayesian Fractional Weibull Regression for Reliability Prognostics and Predictive Maintenance

by
Muath Awadalla
1,* and
Manigandan Murugesan
2,*
1
Department of Mathematics and Statistics, College of Science, King Faisal University, Al Ahsa 31982, Saudi Arabia
2
Center for Nonlinear and Complex Networks, SRM TRP Engineering College, Tiruchirapalli 621105, Tamil Nadu, India
*
Authors to whom correspondence should be addressed.
Mathematics 2026, 14(1), 169; https://doi.org/10.3390/math14010169
Submission received: 24 November 2025 / Revised: 26 December 2025 / Accepted: 30 December 2025 / Published: 1 January 2026
(This article belongs to the Special Issue Reliability Estimation and Mathematical Statistics)

Abstract

This paper introduces a novel Bayesian Fractional Weibull (BFW) regression framework, which generalizes the classical Weibull accelerated failure time model using fractional calculus. The proposed methodology addresses key challenges in big data reliability engineering and predictive maintenance by incorporating a fractional order parameter that provides adaptive flexibility when classical Weibull assumptions are violated. We obtain fractional score equations using Caputo derivatives and provide theoretical consistency by proving that classical maximum likelihood estimation emerges as a special case when the fractional order approaches unity. A Bayesian implementation enables full uncertainty quantification and robust inference, particularly in reliability applications characterized by limited data and complicated failure mechanisms. Comprehensive numerical experiments demonstrate the efficacy of the framework: synthetic data validate theoretical properties under both well-specified and misspecified scenarios, while a real-world case study using the NASA C-MAPSS turbofan engine dataset—a standard benchmark in recent reliability literature—shows substantial improvements in predictive performance. Specifically, the BFW model achieves a 21.7% improvement in mean absolute error for remaining useful life prediction compared with classical Weibull regression. This framework combines the theoretical rigor of fractional calculus with the practical advantages of Bayesian inference, directly addressing the need for interpretable and robust methods in big data reliability analytics.

1. Introduction

In the era of Industry 4.0 and big data analytics, reliability engineering has undergone a transformative shift from traditional time-based maintenance to sophisticated condition-based and predictive maintenance paradigms [1]. The proliferation of sensor technologies and Internet of Things (IoT) devices has enabled continuous monitoring of industrial assets, generating massive datasets that capture complex degradation patterns and failure mechanisms [2]. This data-rich environment presents unprecedented opportunities for improving asset reliability, reducing maintenance costs, and enhancing operational safety across critical sectors including aerospace, energy, and manufacturing [3]. However, the complexity, scale, and heterogeneity of modern reliability data also pose significant challenges for traditional statistical modeling approaches, particularly when dealing with anomalous failure behaviors, multi-stage degradation processes, and non-standard aging patterns that systematically deviate from classical assumptions [4].
Classical parametric models, with the Weibull distribution as their cornerstone, have long served as the foundation of reliability analysis due to their mathematical tractability and physical interpretability [5]. The Weibull accelerated failure time (AFT) model, characterized by its shape parameter α , has demonstrated remarkable flexibility in capturing various failure rate behaviors—from decreasing ( α < 1 ) to constant ( α = 1 ) and increasing ( α > 1 ) hazard rates [6]. Despite its enduring utility, the fixed-shape parameter assumption of the classical Weibull model often proves inadequate for capturing the complex degradation dynamics, memory effects, and heterogeneous sub-populations prevalent in modern engineering systems [7]. The limitations become particularly pronounced in big data reliability scenarios, where the sheer volume and variety of data expose subtle but significant deviations from standard parametric forms [8]. Recent research continues to refine Weibull-based approaches to address these challenges, such as through hierarchical structures for fleet learning [9] and sophisticated metaheuristic algorithms for parameter estimation in multi-parameter distributions [10].
Traditional approaches to handling model misspecification in reliability often resort to ad-hoc solutions such as mixture models or covariate-dependent shape parameters, but these can lead to overparameterization, computational inefficiency, and interpretability challenges [11]. Alternative flexible distributions, such as the generalized gamma [12], Birnbaum–Saunders [13], and exponentiated Weibull families [11], typically introduce additional parameters without the mathematical coherence offered by a unifying theoretical framework. Non-parametric and semi-parametric methods, while offering greater flexibility, often sacrifice the physical interpretability and extrapolation capability that make parametric models so valuable in engineering practice [14]. Furthermore, recent deep learning approaches have pushed the boundaries of prognostic accuracy, using techniques such as supervised contrastive learning [15], and graph feature attention networks [16] to model complex temporal dependencies. This landscape reveals a critical need for a more principled and flexible modeling framework that can naturally adapt to complex failure patterns while maintaining mathematical coherence and physical interpretability—a need particularly acute in the context of big data reliability analytics.
Fractional calculus (FC), the branch of mathematics dealing with derivatives and integrals of arbitrary (non-integer) order, has emerged as a powerful tool for generalizing classical models across various scientific domains [17]. By introducing a continuous order parameter, fractional operators naturally capture memory effects, long-range dependencies, and anomalous dynamics that integer-order models cannot adequately represent [18]. The application of FC to reliability engineering has primarily focused on modeling complex physical processes, with recent work demonstrating its effectiveness for probabilistic failure analysis of stochastically excited nonlinear structural systems with fractional derivative elements [19] and for non-stationary response determination of linear systems with fractional components [20]. Beyond engineering mechanics, our previous research has established the successful application of fractional calculus to generalize core statistical models, including linear [21], Poisson, and logistic regression frameworks. These works demonstrated that fractional derivatives provide a continuous bridge between classical and more flexible estimation paradigms, with the classical maximum likelihood estimator emerging as a special case when the fractional order approaches unity. This consistent mathematical framework has shown remarkable effectiveness in handling over-dispersion, zero-inflation, and other forms of model misspecification in various statistical contexts through a fundamentally different approach to model generalization.
Despite these promising developments, the integration of fractional calculus with Bayesian methods for statistical reliability modeling represents a significant and largely unexplored research gap. This gap is particularly notable given the complementary strengths of both approaches: fractional calculus provides the mathematical framework for flexible model generalization through a continuous order parameter, while Bayesian methods offer principled uncertainty quantification and robust inference—especially valuable in reliability applications where data may be limited, censored, or subject to multiple sources of variation [22]. While advanced Bayesian computational methods like Hamiltonian Monte Carlo are being successfully applied to complex reliability problems [23], and while Gaussian process models combined with Bayesian inference are providing non-parametric solutions for failure modeling [21], their synergy with fractional operators has not been systematically investigated. The current literature reveals several additional research opportunities: existing fractional statistical models have primarily focused on frequentist estimation frameworks, leaving the Bayesian perspective underdeveloped; the combination of FC with lifetime data analysis remains in its infancy; and the potential for fractional models to provide a continuum of solutions between classical and robust forms while maintaining interpretability represents a particularly promising direction for reliability applications.
This paper addresses these gaps by introducing a comprehensive Bayesian Fractional Weibull (BFW) regression framework that fundamentally generalizes classical Weibull regression through fractional calculus. We develop the fractional Weibull model by applying Caputo fractional derivatives to the classical log-likelihood function, deriving fractional score equations that generalize the classical score equations. We theoretically prove that the classical maximum likelihood estimator emerges as a special case of our fractional framework when the fractional order approaches unity, ensuring mathematical consistency with established methods. We implement a fully Bayesian inference procedure using Hamiltonian Monte Carlo that characterizes the joint posterior distribution of all parameters, including the fractional order, enabling complete uncertainty quantification in complex reliability scenarios. Through comprehensive numerical experiments—including synthetic data validation under both well-specified and misspecified conditions and a substantial real-world case study using the NASA C-MAPSS turbofan engine dataset—we demonstrate that our approach substantially outperforms classical Weibull regression and state-of-the-art alternatives in predictive accuracy, uncertainty quantification, and robustness to model misspecification. The real-world case study is particularly relevant to the special issue focus on “Big Data Analytics in Reliability Engineering” and “Predictive Maintenance and Prognostics,” as it involves a large-scale prognostic application with 56,000 observations and 18 engineered features, where our BFW model achieves a 21.7% improvement in mean absolute error for remaining useful life prediction compared to classical Weibull regression while maintaining well-calibrated uncertainty intervals.
The proposed BFW framework represents a significant advancement in reliability modeling by combining the theoretical rigor of fractional calculus with the practical advantages of Bayesian inference. By providing a continuous, interpretable parameter that controls model flexibility, our approach enables reliability engineers to capture complex failure patterns while maintaining physical interpretability and mathematical coherence. This is particularly valuable in the context of big data reliability analytics, where the ability to adapt to complex degradation patterns while providing well-calibrated uncertainty intervals is essential for effective predictive maintenance and risk-informed decision making. The remainder of this paper is organized as follows: Section 2 reviews the classical Weibull regression model and essential concepts from fractional calculus. Section 3 presents our main theoretical contribution—the fractional Weibull regression framework and its convergence properties and develops the Bayesian implementation and computational details. Section 4 demonstrates the methodology through comprehensive numerical experiments, and Section 5 concludes with discussion and future research directions.

2. Preliminaries

This section provides the foundational concepts and mathematical framework upon which this work is built. We first summarize the classical Weibull regression model, a cornerstone of reliability analysis. We then present the essential definitions from fractional calculus, with particular focus on the Caputo fractional derivative, which forms the basis of our generalization. Finally, we briefly state the core motivation derived from recent advancements in fractional statistical models.

2.1. Classical Weibull Regression

The Weibull distribution is one of the most widely used lifetime distributions in reliability engineering and survival analysis due to its flexibility in modeling various failure rates [1,6]. In its parametric regression form, often referred to as the Accelerated Failure Time (AFT) model, it relates a set of covariates to the survival time.
Let $T_i$ represent the failure time for the i-th unit ($i = 1, \dots, n$). The Weibull AFT model assumes
$$ T_i \sim \mathrm{Weibull}(\alpha, \lambda_i), $$
where $\alpha > 0$ is the shape parameter and $\lambda_i > 0$ is the scale parameter for the i-th unit. The probability density function (pdf) is given by
$$ f(t_i \mid \alpha, \lambda_i) = \frac{\alpha}{\lambda_i} \left( \frac{t_i}{\lambda_i} \right)^{\alpha - 1} \exp\!\left[ -\left( \frac{t_i}{\lambda_i} \right)^{\alpha} \right], \quad t_i > 0. $$
The scale parameter $\lambda_i$ is linked to a vector of covariates $x_i \in \mathbb{R}^{p+1}$ (which includes an intercept term) through a log-linear function:
$$ \lambda_i = \exp(x_i^{T} \beta), $$
where $\beta \in \mathbb{R}^{p+1}$ is the vector of regression coefficients.
Assuming independent and possibly right-censored observations, the likelihood function for the model is
$$ L(\beta, \alpha \mid \mathbf{t}, X) = \prod_{i=1}^{n} \left[ f(t_i \mid \alpha, \lambda_i) \right]^{\delta_i} \left[ S(t_i \mid \alpha, \lambda_i) \right]^{1 - \delta_i}, $$
where $\delta_i$ is the censoring indicator ($\delta_i = 1$ for an observed failure, $\delta_i = 0$ for a censored observation) and $S(t_i \mid \alpha, \lambda_i) = \exp[-(t_i/\lambda_i)^{\alpha}]$ is the survival function.
The corresponding log-likelihood function is derived as follows. For a failed unit ($\delta_i = 1$), the contribution is $\log f(t_i)$:
$$ \log f(t_i) = \log \alpha - \log \lambda_i + (\alpha - 1)(\log t_i - \log \lambda_i) - \left( \frac{t_i}{\lambda_i} \right)^{\alpha}. $$
For a censored unit ($\delta_i = 0$), the contribution is $\log S(t_i)$:
$$ \log S(t_i) = -\left( \frac{t_i}{\lambda_i} \right)^{\alpha}. $$
Combining these contributions and using the log-linear link $\log \lambda_i = x_i^{T} \beta$, the full log-likelihood simplifies to the following compact form:
$$ \ell(\beta, \alpha) = \sum_{i=1}^{n} \left\{ \delta_i \left[ \log \alpha - x_i^{T} \beta + (\alpha - 1)(\log t_i - x_i^{T} \beta) \right] - \exp\!\left[ \alpha (\log t_i - x_i^{T} \beta) \right] \right\}. $$
This expression correctly aggregates the terms for both failed and censored observations, where the exponential term $\exp[\alpha(\log t_i - x_i^{T} \beta)] = (t_i/\lambda_i)^{\alpha}$ applies universally. The parameters $(\beta, \alpha)$ are typically estimated by maximizing this log-likelihood function [24].
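As a concrete illustration, the compact log-likelihood above can be evaluated directly. The following minimal NumPy sketch (ours, not from the paper) computes it for right-censored data; the function name and argument layout are illustrative choices:

```python
import numpy as np

def weibull_aft_loglik(beta, alpha, t, X, delta):
    """Right-censored Weibull AFT log-likelihood (compact form above).

    beta  : (p+1,) coefficients (intercept first); log(lambda_i) = x_i' beta
    alpha : shape parameter (> 0)
    t     : (n,) observed failure or censoring times
    X     : (n, p+1) design matrix with a leading column of ones
    delta : (n,) indicators (1 = observed failure, 0 = right-censored)
    """
    z = alpha * (np.log(t) - X @ beta)        # exp(z) = (t_i / lambda_i)^alpha
    # delta_i * log f(t_i) + (1 - delta_i) * log S(t_i), aggregated:
    return np.sum(delta * (np.log(alpha) + z - np.log(t)) - np.exp(z))
```

For example, with $\alpha = 1$ and $\lambda_i = 1$ the model reduces to the unit exponential, so an observed failure at $t = 1$ contributes $\log f(1) = -1$, and a censoring time $t = 2$ contributes $\log S(2) = -2$.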

2.2. Elements of Fractional Calculus

Fractional Calculus (FC) generalizes the concepts of differentiation and integration to arbitrary (non-integer) orders. We now state the definitions essential to this work, following the established literature [17,18,25].
Definition 1
(Riemann–Liouville Fractional Integral [18]). Let f be a piecewise continuous function on $(0, \infty)$ and integrable on any finite subinterval of $[0, \infty)$. The Riemann–Liouville fractional integral of order $q > 0$ of $f(t)$ is defined as
$$ (I^{q} f)(t) = \frac{1}{\Gamma(q)} \int_0^{t} (t - s)^{q - 1} f(s) \, ds, \quad t > 0, $$
where $\Gamma(\cdot)$ is the Euler gamma function.
Definition 2
(Caputo Fractional Derivative [17]). Let $f \in AC^{n}[0, T]$, the space of functions with absolutely continuous $(n-1)$th derivatives. The Caputo fractional derivative of order $q > 0$ ($n - 1 < q \le n$, $n \in \mathbb{N}$) of $f(t)$ is defined by its integral representation:
$$ ({}^{C}D^{q} f)(t) = \frac{1}{\Gamma(n - q)} \int_0^{t} (t - s)^{n - q - 1} f^{(n)}(s) \, ds. $$
This operator can be understood as $({}^{C}D^{q} f)(t) = I^{\,n-q} f^{(n)}(t)$, where the fractional integral $I^{\,n-q}$ is applied to the n-th derivative of f.
A key property of the Caputo derivative that makes it suitable for modeling physical and statistical phenomena is that the derivative of a constant is zero:
$$ {}^{C}D^{q}(\text{constant}) = 0. $$
Furthermore, for $0 < q < 1$ (which will be our primary focus), the definition simplifies to:
$$ ({}^{C}D^{q} f)(t) = \frac{1}{\Gamma(1 - q)} \int_0^{t} (t - s)^{-q} f'(s) \, ds. $$
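For intuition, the $0 < q < 1$ Caputo derivative can be evaluated numerically and checked against the known closed form ${}^{C}D^{q}\, t^2 = 2 t^{2-q} / \Gamma(3-q)$. The following SciPy sketch (ours, not part of the paper) uses `quad`'s algebraic weighting to handle the $(t-s)^{-q}$ endpoint singularity exactly:

```python
from scipy.integrate import quad
from scipy.special import gamma

def caputo_deriv(fprime, t, q):
    """Caputo derivative of order q in (0, 1):
    (1 / Gamma(1-q)) * int_0^t (t - s)^(-q) f'(s) ds.
    quad's 'alg' weight integrates against (x - a)^0 * (b - x)^(-q),
    which handles the weakly singular kernel accurately."""
    val, _ = quad(fprime, 0.0, t, weight="alg", wvar=(0.0, -q))
    return val / gamma(1.0 - q)

# Check against the known closed form for f(t) = t^2 (f'(s) = 2s):
q, t = 0.5, 1.5
numeric = caputo_deriv(lambda s: 2.0 * s, t, q)
exact = 2.0 * t ** (2.0 - q) / gamma(3.0 - q)
```

The same quadrature pattern reappears later when the fractional score equations are evaluated numerically.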

2.3. Motivation from Fractional Generalized Linear Models

The present work is directly motivated by recent results that successfully generalized the estimation of core statistical models using fractional calculus. Specifically, ref. [26] derived fractional estimators for linear regression, while our ongoing research has extended this paradigm to Poisson and logistic regression frameworks by applying the Caputo fractional derivative to their log-likelihood functions. A pivotal result in this line of research is the rigorous proof that the classical maximum likelihood estimator constitutes a special case of the broader fractional estimator, recovered exactly when the fractional order q = 1 . This establishes a continuous bridge between classical and fractional estimation paradigms. This paper aims to extend this powerful connection to the domain of reliability analysis by formulating a fractional Weibull regression model.

3. The Fractional Weibull Regression Model

This section presents the primary theoretical contribution of this work: the generalization of Weibull regression estimation into the fractional domain. We derive the fractional analogues of the score equations by applying the Caputo fractional derivative to the log-likelihood function. The central result is that the classical maximum likelihood estimator emerges as a specific instance of a more general family of fractional estimators.

3.1. Derivation of the Fractional Score Equations

The foundation of our approach is to replace the integer-order gradient of the log-likelihood, which defines the classical score function, with its fractional-order counterpart. We define the fractional score function of order $q$ ($0 < q < 1$) for the Weibull regression model as the vector of Caputo fractional derivatives of the log-likelihood with respect to the parameter vector $\beta$:
$$ S^{q}(\beta, \alpha) = \frac{\partial^{q} \ell(\beta, \alpha)}{\partial \beta^{q}} = \left( \frac{\partial^{q} \ell(\beta, \alpha)}{\partial \beta_0^{q}}, \frac{\partial^{q} \ell(\beta, \alpha)}{\partial \beta_1^{q}}, \dots, \frac{\partial^{q} \ell(\beta, \alpha)}{\partial \beta_p^{q}} \right)^{T}. $$
Our objective is to find the parameter estimate $\hat{\beta}_q$ that satisfies the fractional optimality condition:
$$ S^{q}(\beta, \alpha) = 0. $$
For the purpose of this derivation, we will initially consider the shape parameter α to be fixed and focus on the regression coefficients β . We will derive the equations for a single parameter β j .
Step 1: Express the Log-Likelihood and its First Derivative. From the classical Weibull AFT model, the log-likelihood can be written in the following expanded form, which is algebraically equivalent to Equation (1) and makes the linear dependence on $\beta$ explicit for differentiation:
$$ \ell(\beta, \alpha) = \sum_{i=1}^{n} \left\{ \delta_i \log \alpha + \delta_i (\alpha - 1) \log t_i - \alpha \delta_i \, x_i^{T} \beta - \exp\!\left[ \alpha (\log t_i - x_i^{T} \beta) \right] \right\}. $$
Let $\eta_i = x_i^{T} \beta$. The partial derivative of the log-likelihood with respect to a specific parameter $\beta_j$ is:
$$ \frac{\partial \ell(\beta, \alpha)}{\partial \beta_j} = \sum_{i=1}^{n} \frac{\partial \ell}{\partial \eta_i} \, \frac{\partial \eta_i}{\partial \beta_j} = \sum_{i=1}^{n} \frac{\partial \ell}{\partial \eta_i} \, x_{ij}, $$
where $x_{ij}$ is the j-th component of $x_i$. Now, we compute $\partial \ell / \partial \eta_i$:
$$ \frac{\partial \ell}{\partial \eta_i} = \frac{\partial}{\partial \eta_i} \left\{ -\alpha \delta_i \eta_i - \exp[\alpha(\log t_i - \eta_i)] \right\} = -\alpha \delta_i + \alpha \exp[\alpha(\log t_i - \eta_i)]. $$
Recall that $\lambda_i = e^{\eta_i}$, so $\exp[\alpha(\log t_i - \eta_i)] = (t_i/\lambda_i)^{\alpha}$. Therefore, the classical score component for $\beta_j$ is
$$ g_j(\beta) = \frac{\partial \ell(\beta, \alpha)}{\partial \beta_j} = \alpha \sum_{i=1}^{n} \left[ \left( \frac{t_i}{\lambda_i} \right)^{\alpha} - \delta_i \right] x_{ij}. $$
This score equation shows that the model is driven by the difference between the scaled cumulative hazard $(t_i/\lambda_i)^{\alpha}$ and the failure indicator $\delta_i$.
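As a numerical sanity check (ours, not part of the paper), the analytic score $g_j$ can be compared against a central finite-difference gradient of the log-likelihood; the data sizes, coefficients, and censoring rule below are illustrative choices:

```python
import numpy as np

def loglik(beta, alpha, t, X, delta):
    # Compact right-censored Weibull AFT log-likelihood
    z = alpha * (np.log(t) - X @ beta)
    return np.sum(delta * (np.log(alpha) + z - np.log(t)) - np.exp(z))

def score(beta, alpha, t, X, delta):
    # g_j(beta) = alpha * sum_i [(t_i / lambda_i)^alpha - delta_i] * x_ij
    z = alpha * (np.log(t) - X @ beta)
    return alpha * (X.T @ (np.exp(z) - delta))

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta, alpha = np.array([0.5, -0.3, 0.2]), 1.7
t = rng.weibull(alpha, n) * np.exp(X @ beta)      # T_i ~ Weibull(alpha, lambda_i)
delta = (t < np.quantile(t, 0.8)).astype(float)   # treat the largest 20% as censored

analytic = score(beta, alpha, t, X, delta)
h = 1e-6
numeric = np.array([(loglik(beta + h * e, alpha, t, X, delta)
                     - loglik(beta - h * e, alpha, t, X, delta)) / (2 * h)
                    for e in np.eye(3)])
```

The two gradients agree to within finite-difference accuracy, confirming the sign and scaling of the score expression.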
Step 2: Apply the Caputo Fractional Derivative
We now apply the Caputo fractional derivative of order $q$ ($0 < q < 1$) with respect to $\beta_j$:
$$ \frac{\partial^{q} \ell(\beta, \alpha)}{\partial \beta_j^{q}} = {}^{C}D^{q}_{\beta_j}\, \ell(\beta, \alpha) = I^{\,1-q}\!\left[ \frac{\partial \ell(\beta, \alpha)}{\partial \beta_j} \right]. $$
Writing the Riemann–Liouville integral $I^{\,1-q}$ explicitly using Definition 1, we obtain
$$ S_j^{q}(\beta) = \frac{\partial^{q} \ell(\beta, \alpha)}{\partial \beta_j^{q}} = \frac{1}{\Gamma(1 - q)} \int_0^{\beta_j} (\beta_j - s)^{-q} \, \frac{\partial \ell}{\partial s}(\beta_{-j}, s, \alpha) \, ds, $$
where $(\beta_{-j}, s)$ denotes the parameter vector with the j-th component replaced by the integration variable s. Inside the integral, for each unit i, we define $\eta_i(\beta_{-j}, s) = x_i^{T} (\beta_{-j}, s)$.
Step 3: Substitute the First Derivative
Substituting the expression for the first derivative from Equation (3) into Equation (4) yields the fractional score equation:
$$ S_j^{q}(\beta) = \frac{1}{\Gamma(1 - q)} \int_0^{\beta_j} (\beta_j - s)^{-q} \, \alpha \sum_{i=1}^{n} \left[ \left( \frac{t_i}{\exp(\eta_i(\beta_{-j}, s))} \right)^{\alpha} - \delta_i \right] x_{ij} \, ds. $$
Equivalently:
$$ S_j^{q}(\beta) = \frac{\alpha}{\Gamma(1 - q)} \sum_{i=1}^{n} x_{ij} \int_0^{\beta_j} (\beta_j - s)^{-q} \left[ t_i^{\alpha} \exp\!\left( -\alpha \, \eta_i(\beta_{-j}, s) \right) - \delta_i \right] ds. $$
The complexity of the exponential term precludes a closed-form solution in general, but the structure enables analysis of the limiting behavior as $q \to 1$.
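In practice, $S_j^{q}$ can be evaluated by one-dimensional quadrature. The sketch below (ours, on simulated data) computes it with SciPy's algebraic-weight quadrature, and illustrates numerically that it approaches the classical score component $g_j$ as $q \to 1$, anticipating Theorem 1; note the Caputo lower limit of 0 requires $\beta_j > 0$ here:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def classical_score_j(s, j, beta, alpha, t, X, delta):
    """g_j evaluated with the j-th coefficient replaced by s."""
    b = beta.copy()
    b[j] = s
    z = alpha * (np.log(t) - X @ b)
    return alpha * np.sum((np.exp(z) - delta) * X[:, j])

def fractional_score_j(j, beta, alpha, t, X, delta, q):
    """S_j^q: Caputo derivative of ell w.r.t. beta_j, via quadrature
    against the algebraic weight (beta_j - s)^(-q)."""
    val, _ = quad(lambda s: classical_score_j(s, j, beta, alpha, t, X, delta),
                  0.0, beta[j], weight="alg", wvar=(0.0, -q))
    return val / gamma(1.0 - q)

# Simulated uncensored data; the error shrinks as q -> 1 (cf. Theorem 1)
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(30), rng.normal(size=30)])
beta, alpha = np.array([0.8, 0.4]), 1.5
t = rng.weibull(alpha, 30) * np.exp(X @ beta)
delta = np.ones(30)
g1 = classical_score_j(beta[1], 1, beta, alpha, t, X, delta)
errs = [abs(fractional_score_j(1, beta, alpha, t, X, delta, q) - g1)
        for q in (0.9, 0.99, 0.999)]
```

The decreasing errors give a numerical preview of the convergence result proved next.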

3.2. The Main Theorem: Convergence to the Classical Estimator

The following theorem shows that our fractional optimality condition recovers the classical score equations as the fractional order approaches unity. This establishes that the proposed fractional generalization is mathematically consistent and contains the standard Weibull AFT estimator as a limiting special case.
Theorem 1. 
Let $S^{q}(\beta, \alpha) = \partial^{q} \ell(\beta, \alpha) / \partial \beta^{q}$ denote the fractional score function for the Weibull regression model, where the derivatives are taken componentwise in the Caputo sense of order q with $0 < q < 1$. Then,
$$ \lim_{q \to 1^{-}} S^{q}(\beta, \alpha) = S(\beta, \alpha), $$
where $S(\beta, \alpha) = \partial \ell(\beta, \alpha) / \partial \beta$ is the classical score vector (cf. Equation (3)).
Proof. 
We prove the claim componentwise. Fix an index $j \in \{0, 1, \dots, p\}$ and consider the function
$$ f(\beta_j) = \ell(\beta_j; \beta_{-j}, \alpha), $$
where $\beta_{-j}$ denotes all components of $\beta$ except $\beta_j$. Under the Weibull AFT specification, $\ell$ is a composition of smooth functions of $\beta$ (linear predictors, exponentials, and sums), hence f is continuously differentiable with respect to $\beta_j$ on any bounded interval. In particular, $f \in C^{1}([0, \beta_j])$ for any fixed $\beta_j > 0$.
Step 1: Start from the Caputo definition. For $0 < q < 1$, the Caputo derivative of f is given by (see, e.g., standard fractional calculus texts)
$$ {}^{C}D^{q}_{\beta_j} f(\beta_j) = \frac{1}{\Gamma(1 - q)} \int_0^{\beta_j} (\beta_j - s)^{-q} f'(s) \, ds. $$
By definition of the fractional score in Section 3.1, we have
$$ S_j^{q}(\beta, \alpha) = {}^{C}D^{q}_{\beta_j} f(\beta_j) = \frac{1}{\Gamma(1 - q)} \int_0^{\beta_j} (\beta_j - s)^{-q} f'(s) \, ds. $$
Step 2: Rewrite the integral in a normalized kernel form. Perform the change of variables $u = \beta_j - s$ (so $s = \beta_j - u$, $ds = -du$). Then
$$ S_j^{q}(\beta, \alpha) = \frac{1}{\Gamma(1 - q)} \int_0^{\beta_j} u^{-q} f'(\beta_j - u) \, du. $$
Next normalize the interval by setting $u = \beta_j t$, where $t \in [0, 1]$ and $du = \beta_j \, dt$:
$$ S_j^{q}(\beta, \alpha) = \frac{\beta_j^{1-q}}{\Gamma(1 - q)} \int_0^{1} t^{-q} f'\!\left( \beta_j (1 - t) \right) dt. $$
At this stage, the integral is expressed as a weighted average of $f'$ with a kernel $t^{-q}$ that becomes increasingly concentrated near $t = 0$ as $q \to 1$.
Step 3: Separate the main term and a remainder. Add and subtract the value $f'(\beta_j)$ inside the integral:
$$ S_j^{q}(\beta, \alpha) = \frac{\beta_j^{1-q}}{\Gamma(1 - q)} \int_0^{1} t^{-q} f'(\beta_j) \, dt + \frac{\beta_j^{1-q}}{\Gamma(1 - q)} \int_0^{1} t^{-q} \left[ f'(\beta_j (1 - t)) - f'(\beta_j) \right] dt. $$
Denote these two terms by $A_q(\beta_j)$ and $R_q(\beta_j)$, respectively:
$$ A_q(\beta_j) = \frac{\beta_j^{1-q}}{\Gamma(1 - q)} \, f'(\beta_j) \int_0^{1} t^{-q} \, dt, \qquad R_q(\beta_j) = \frac{\beta_j^{1-q}}{\Gamma(1 - q)} \int_0^{1} t^{-q} \left[ f'(\beta_j (1 - t)) - f'(\beta_j) \right] dt. $$
Step 4: Evaluate the main term $A_q(\beta_j)$. Since $0 < q < 1$,
$$ \int_0^{1} t^{-q} \, dt = \left[ \frac{t^{1-q}}{1 - q} \right]_0^{1} = \frac{1}{1 - q}. $$
Therefore,
$$ A_q(\beta_j) = \frac{\beta_j^{1-q}}{\Gamma(1 - q)} \cdot \frac{1}{1 - q} \, f'(\beta_j). $$
Using the Gamma identity $\Gamma(2 - q) = (1 - q)\Gamma(1 - q)$, we can rewrite
$$ \frac{1}{(1 - q)\Gamma(1 - q)} = \frac{1}{\Gamma(2 - q)}. $$
Hence,
$$ A_q(\beta_j) = \frac{\beta_j^{1-q}}{\Gamma(2 - q)} \, f'(\beta_j). $$
Now take the limit $q \to 1^{-}$: since $\beta_j^{1-q} \to \beta_j^{0} = 1$ and $\Gamma(2 - q) \to \Gamma(1) = 1$, we obtain
$$ \lim_{q \to 1^{-}} A_q(\beta_j) = f'(\beta_j). $$
Step 5: Show that the remainder $R_q(\beta_j)$ vanishes. Because $f'$ is continuous on $[0, \beta_j]$, it is uniformly continuous on this compact interval. Hence, for every $\varepsilon > 0$, there exists $\delta \in (0, 1)$ such that
$$ \left| f'(\beta_j (1 - t)) - f'(\beta_j) \right| < \varepsilon \quad \text{whenever } 0 \le t \le \delta. $$
Split the remainder integral accordingly:
$$ R_q(\beta_j) = \frac{\beta_j^{1-q}}{\Gamma(1 - q)} \left\{ \int_0^{\delta} t^{-q} \left[ f'(\beta_j (1 - t)) - f'(\beta_j) \right] dt + \int_{\delta}^{1} t^{-q} \left[ f'(\beta_j (1 - t)) - f'(\beta_j) \right] dt \right\}. $$
For the first part, by the choice of $\delta$,
$$ \left| \int_0^{\delta} t^{-q} \left[ f'(\beta_j (1 - t)) - f'(\beta_j) \right] dt \right| \le \varepsilon \int_0^{\delta} t^{-q} \, dt = \varepsilon \, \frac{\delta^{1-q}}{1 - q}. $$
For the second part, since $f'$ is continuous on $[0, \beta_j]$, it is bounded; let
$$ M = \sup_{s \in [0, \beta_j]} |f'(s)| < \infty. $$
Then
$$ \left| \int_{\delta}^{1} t^{-q} \left[ f'(\beta_j (1 - t)) - f'(\beta_j) \right] dt \right| \le 2M \int_{\delta}^{1} t^{-q} \, dt = 2M \, \frac{1 - \delta^{1-q}}{1 - q}. $$
Combining these bounds yields
$$ |R_q(\beta_j)| \le \frac{\beta_j^{1-q}}{\Gamma(1 - q)} \left[ \varepsilon \, \frac{\delta^{1-q}}{1 - q} + 2M \, \frac{1 - \delta^{1-q}}{1 - q} \right] = \frac{\beta_j^{1-q}}{\Gamma(2 - q)} \left[ \varepsilon \, \delta^{1-q} + 2M (1 - \delta^{1-q}) \right], $$
where we again used $(1 - q)\Gamma(1 - q) = \Gamma(2 - q)$. Now take $q \to 1^{-}$: we have $\beta_j^{1-q} \to 1$, $\Gamma(2 - q) \to 1$, and $\delta^{1-q} \to 1$, hence $(1 - \delta^{1-q}) \to 0$. Therefore,
$$ \limsup_{q \to 1^{-}} |R_q(\beta_j)| \le \varepsilon. $$
Since $\varepsilon > 0$ was arbitrary, $\lim_{q \to 1^{-}} R_q(\beta_j) = 0$.
Step 6: Combine the limits. From Steps 4 and 5,
$$ \lim_{q \to 1^{-}} S_j^{q}(\beta, \alpha) = \lim_{q \to 1^{-}} A_q(\beta_j) + \lim_{q \to 1^{-}} R_q(\beta_j) = f'(\beta_j) + 0 = \frac{\partial \ell(\beta, \alpha)}{\partial \beta_j}. $$
This is exactly the jth component of the classical score vector $S(\beta, \alpha)$ defined in Equation (3). Since the argument holds for every $j = 0, 1, \dots, p$, we obtain the vector limit:
$$ \lim_{q \to 1^{-}} S^{q}(\beta, \alpha) = S(\beta, \alpha), $$
which completes the proof.    □
While Theorem 1 establishes that the classical score equations are recovered as $q \to 1$, the behavior for smaller values of q offers distinct statistical insights. As $q \to 0$, the fractional derivative increasingly smooths the score function, leading to more robust estimates that are less sensitive to outliers or model misspecification, in a manner analogous to regularization techniques in robust statistics. For intermediate values such as $q = 1/2$, the resulting estimator balances efficiency and robustness, adapting naturally to data exhibiting moderate deviations from the classical Weibull assumptions.

3.3. Bayesian Framework for Fractional Weibull Regression

The fractional score equations derived in Section 3 define a system of transcendental equations that generalize classical maximum likelihood estimation. To fully quantify uncertainty in all parameters—including the fractional order q—and to enable robust inference particularly valuable in reliability applications with limited data, we formulate the estimation within a Bayesian framework. This approach requires specifying a coherent probabilistic model that properly integrates the fractional calculus framework.
Model Specification via Likelihood Deformation. A mathematically coherent approach to embedding the fractional estimation within Bayesian inference is through a likelihood deformation framework. This approach is motivated by the fundamental connection between the fractional score equations and a powered likelihood.
The connection between the fractional score equation and a powered likelihood can be substantiated by considering the properties of the Caputo derivative. For a log-likelihood function $\ell(\theta)$, the Caputo fractional derivative of order $q$ with respect to a parameter $\theta$ can be expressed as
$$ \frac{\partial^{q} \ell(\theta)}{\partial \theta^{q}} = I^{\,1-q}\!\left[ \frac{\partial \ell(\theta)}{\partial \theta} \right], $$
where $I^{\,1-q}$ is the Riemann–Liouville fractional integral. While a closed-form solution for the resulting score equation is generally intractable, a key insight emerges when considering the role of the fractional integral as a smoothing operator. The operation $I^{\,1-q}$ applied to the classical score function has a functional effect that is analogous to tempering or re-weighting the likelihood. This connection becomes exact for location-scale families and provides a mathematically coherent motivation for the powered likelihood approach in more complex models [18,25].
Specifically, the fractional score equation of order $q$ for a parameter, derived via the Caputo derivative, induces an optimality condition that is structurally analogous to that obtained from a powered likelihood. Motivated by this functional similarity, we adopt the powered likelihood $[L(\theta)]^{g(q)}$ as a natural Bayesian generalization of the fractional estimation framework, where the deformation function $g(q)$ governs the continuous morphing between the classical and fractional paradigms.
We therefore define the fractional model by raising the classical Weibull likelihood to a power governed by the fractional order $q$:
$$ p(\mathcal{D} \mid \beta, \alpha, q) \propto \left[ L_{\text{classical}}(\beta, \alpha) \right]^{g(q)}, $$
where $L_{\text{classical}}(\beta, \alpha)$ is the classical Weibull likelihood from Equation (1) and $g(q)$ is a deformation function satisfying $g(1) = 1$. This formulation ensures that when $q = 1$, we recover the classical Bayesian Weibull model exactly, maintaining theoretical consistency with our results in Section 3.2.
The resulting fractional log-likelihood is
$$ \ell_{\text{frac}}(\beta, \alpha, q) = g(q) \cdot \ell(\beta, \alpha), $$
where $\ell(\beta, \alpha)$ is the classical Weibull log-likelihood.
The deformation function $g(q)$ controls the influence of the likelihood. The choice $g(q) = q$ is natural and parsimonious, as it ensures a linear deformation towards the classical likelihood ($g(1) = 1$) and a flattened, more robust likelihood surface as $q$ decreases. This choice is consistent with the concept of power priors in Bayesian analysis [27], where a likelihood raised to a power is used to represent discounted historical data or to achieve robust estimation. Alternative forms, such as $g(q) = q^2$, would impose a nonlinear deformation that disproportionately suppresses the likelihood for $q < 1$, which lacks a clear theoretical motivation for the fractional bridge we aim to construct. The linear form provides a direct and interpretable continuum between the classical and fractional paradigms.
This “fractional-powered” likelihood approach is theoretically justified as it produces a proper statistical model while generalizing the classical case, providing a continuum of models between the classical and more robust forms [27].

3.4. Prior Distributions

We specify independent prior distributions for all unknown parameters, reflecting prior knowledge while maintaining weak regularization where appropriate.
  • Regression Coefficients: A multivariate normal prior facilitates the incorporation of potential prior knowledge about effect sizes:
    $$ \beta \sim \mathcal{N}_{p+1}(\mu_0, \Sigma_0). $$
    For weakly informative settings, we set $\mu_0 = 0$ and $\Sigma_0 = \sigma_\beta^2 I$ with $\sigma_\beta^2$ large (e.g., 100).
  • Shape Parameter: A Gamma prior ensures positivity while allowing flexibility:
    $$ \alpha \sim \mathrm{Gamma}(a_\alpha, b_\alpha), $$
    with hyperparameters chosen to be weakly informative. We use $a_\alpha = 2$, $b_\alpha = 0.5$ in the shape–rate parameterization, which places most prior mass in the typical range for reliability applications ($\alpha \in (0.5, 5)$) while giving low probability to extreme values ($\alpha > 10$). This prior has mean 4 and variance 8, providing mild regularization without imposing strong subjective beliefs, and is particularly suitable for the Weibull distribution, where $\alpha$ controls the hazard shape (decreasing for $\alpha < 1$, constant for $\alpha = 1$, increasing for $\alpha > 1$).
  • Fractional Order: We specify a prior that expresses the belief that the classical model ($q = 1$) is a reasonable starting point. A scaled Beta distribution on the interval $[q_{\min}, q_{\max}] = [0.7, 1.3]$ provides an appropriately flexible prior:
    $$ \frac{q - q_{\min}}{q_{\max} - q_{\min}} \sim \mathrm{Beta}(a_q, b_q), $$
    where setting $a_q = b_q$ centers the prior at $q = 1$. We use $a_q = b_q = 5$, which yields a prior that is concentrated around $q = 1$ but still allows sufficient flexibility to adapt to data that deviate from the classical Weibull assumptions.
The joint prior distribution is thus
$$ p(\beta, \alpha, q) = p(\beta) \cdot p(\alpha) \cdot p(q). $$
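The prior specification above translates directly into code. The following SciPy sketch (ours, not from the paper) evaluates the joint log-prior with the stated hyperparameters, including the Jacobian of the scaled Beta transformation; the variable names are illustrative:

```python
import numpy as np
from scipy import stats

# Hyperparameters as specified in Section 3.4
sigma2_beta = 100.0            # weakly informative N(0, 100 I) for beta
a_alpha, b_alpha = 2.0, 0.5    # Gamma(shape, rate): mean 4, variance 8
q_min, q_max = 0.7, 1.3        # support of the scaled Beta prior on q
a_q = b_q = 5.0                # symmetric, so the prior is centred at q = 1

def log_prior(beta, alpha, q):
    lp = stats.norm.logpdf(beta, 0.0, np.sqrt(sigma2_beta)).sum()
    lp += stats.gamma.logpdf(alpha, a_alpha, scale=1.0 / b_alpha)  # rate -> scale
    u = (q - q_min) / (q_max - q_min)      # scaled Beta, with Jacobian term
    lp += stats.beta.logpdf(u, a_q, b_q) - np.log(q_max - q_min)
    return lp
```

With $a_q = b_q$ the prior on q is symmetric about 1, so equal deviations such as q = 0.9 and q = 1.1 receive equal prior density.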

3.5. Posterior Inference and Computational Implementation

We employ Hamiltonian Monte Carlo (HMC) [28] as implemented by Stan [29] to sample from the posterior distribution. This state-of-the-art sampling algorithm has been successfully applied to complex reliability models in recent literature [2], making it particularly suitable for our fractional framework. HMC is especially effective for this model because it efficiently handles the high-dimensional, potentially correlated parameter space using gradient information, and the deformation approach maintains differentiable log-posteriors required for efficient HMC sampling.
The log-posterior used for HMC sampling is derived from Bayes’ theorem:
log p ( β , α , q ∣ D ) = g ( q ) · ℓ ( β , α ) + log p ( β ) + log p ( α ) + log p ( q ) + constant ,
where ℓ ( β , α ) is the classical Weibull log-likelihood from Equation (1).
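To make the deformation concrete, the following Python sketch evaluates the deformed log-posterior above for a right-censored Weibull AFT model with linear g ( q ) = q; the tiny dataset and the flat prior are illustrative placeholders only, not the priors of the previous subsection:

```python
import math

def weibull_aft_loglik(beta, alpha, X, t, delta):
    """Classical Weibull AFT log-likelihood with right censoring.
    Scale lambda_i = exp(x_i^T beta); delta_i = 1 for observed failures."""
    ll = 0.0
    for x_i, t_i, d_i in zip(X, t, delta):
        lam = math.exp(sum(b * v for b, v in zip(beta, x_i)))
        z = (t_i / lam) ** alpha
        if d_i:  # observed failure: log f(t_i)
            ll += math.log(alpha) - math.log(t_i) + alpha * math.log(t_i / lam) - z
        else:    # right-censored: log S(t_i)
            ll += -z
    return ll

def log_posterior(beta, alpha, q, X, t, delta, log_prior):
    """Likelihood-deformation log-posterior with linear g(q) = q."""
    return q * weibull_aft_loglik(beta, alpha, X, t, delta) + log_prior(beta, alpha, q)

# Toy check that q = 1 recovers the classical log-posterior.
flat = lambda beta, alpha, q: 0.0  # improper flat prior, for illustration only
X = [[1.0, 0.2], [1.0, -0.5], [1.0, 1.1]]
t = [1.4, 0.9, 2.3]
delta = [1, 1, 0]
lp_classical = log_posterior([0.1, 0.3], 1.8, 1.0, X, t, delta, flat)
lp_fractional = log_posterior([0.1, 0.3], 1.8, 0.9, X, t, delta, flat)
```

Because g is smooth in q, the resulting log-posterior remains differentiable, which is the property HMC relies on.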

3.6. Model Comparison and Predictive Performance

We compare the fractional Weibull model against the classical model ( q = 1 ) using Bayesian model comparison techniques:
  • Widely Applicable Information Criterion (WAIC) [30] provides a fully Bayesian measure of predictive accuracy that approximates out-of-sample deviance, computed as
    WAIC = − 2 ( lppd − p ^ WAIC ) .
  • Leave-One-Out Cross-Validation (LOO-CV) [31] offers another robust estimate of out-of-sample prediction error by approximating the expected log pointwise predictive density.
Both criteria automatically account for the flexibility introduced by the fractional order q through their fully Bayesian formulation, providing principled guidance for model selection while properly penalizing model complexity.
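As a minimal illustration of how WAIC is obtained from posterior draws, the sketch below computes WAIC = −2 (lppd − p̂ WAIC) from an S × n matrix of pointwise log-likelihoods, with p̂ WAIC taken as the sum of the pointwise posterior variances of the log-likelihood. In practice one would use the loo R package on Stan output; this pure-Python version is only a sketch:

```python
import math

def waic(loglik):
    """WAIC from loglik[s][i] = log p(y_i | theta^(s)), s = 1..S draws, i = 1..n."""
    S, n = len(loglik), len(loglik[0])
    lppd, p_waic = 0.0, 0.0
    for i in range(n):
        col = [loglik[s][i] for s in range(S)]
        # lppd_i = log( (1/S) * sum_s exp(loglik[s][i]) ), computed stably
        m = max(col)
        lppd += m + math.log(sum(math.exp(c - m) for c in col) / S)
        # p_WAIC_i = sample variance of the pointwise log-likelihood
        mean = sum(col) / S
        p_waic += sum((c - mean) ** 2 for c in col) / (S - 1)
    return -2.0 * (lppd - p_waic)
```

Lower WAIC indicates better estimated out-of-sample predictive accuracy; the variance term is what penalizes the extra flexibility introduced by q.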

3.7. Posterior Predictive Checks for Reliability Assessment

A key advantage of the Bayesian framework is the ability to generate the posterior predictive distribution for failure times:
p ( T rep ∣ D ) = ∫ p ( T rep ∣ β , α , q ) p ( β , α , q ∣ D ) d β d α d q .
We employ posterior predictive checks [32] to assess model adequacy by comparing replicated failure time distributions T rep to the observed data. This typically involves generating multiple replicated datasets from the posterior predictive distribution and comparing summary statistics (e.g., quantiles, mean failure times) or visual patterns with those of the observed data. This is particularly valuable in reliability applications for validating the model’s ability to capture the failure process characteristics, such as hazard shapes and censoring patterns.
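The replicate-and-compare logic can be sketched as follows for an intercept-only Weibull model: each posterior draw (α_s, λ_s) generates a replicated dataset, and a posterior predictive p-value is computed for a chosen summary statistic. The parameter draws below are synthetic placeholders rather than output from the fitted BFW model:

```python
import math
import random

def weibull_sample(rng, alpha, lam, n):
    """Inverse-CDF sampling: T = lam * (-log U)^(1/alpha), U ~ Uniform(0, 1)."""
    return [lam * (-math.log(rng.random())) ** (1.0 / alpha) for _ in range(n)]

def ppc_pvalue(observed, draws, stat, seed=0):
    """Fraction of replicated datasets whose statistic meets or exceeds the observed one."""
    rng = random.Random(seed)
    obs = stat(observed)
    exceed = 0
    for alpha_s, lam_s in draws:        # one replicated dataset per posterior draw
        rep = weibull_sample(rng, alpha_s, lam_s, len(observed))
        if stat(rep) >= obs:
            exceed += 1
    return exceed / len(draws)

median = lambda xs: sorted(xs)[len(xs) // 2]
observed = weibull_sample(random.Random(1), 1.8, 1.0, 100)  # stand-in observed data
```

A p-value near 0 or 1 flags a statistic the model fails to reproduce; values away from the extremes indicate adequacy for that feature of the data.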

3.8. Algorithm Implementation

The complete estimation procedure is summarized in Algorithm 1.
Algorithm 1 Bayesian Estimation for Fractional Weibull Regression
Require: Dataset D ; deformation function g ( q ) ; priors p ( β ) , p ( α ) , p ( q ) ; number of chains C; total iterations S
Ensure: Posterior samples { β ( s ) , α ( s ) , q ( s ) } s = 1 S
1: Specify the log-posterior according to Equation (11)
2: Initialize the HMC sampler with maximum likelihood estimates or random starting values
3: Run a warm-up phase (typically S / 2 iterations) to adapt step sizes and the mass matrix
4: Draw samples from the posterior using HMC with NUTS [34] for S iterations
5: Assess convergence using R ^ < 1.01 [33] and effective sample size > 100 per chain
6: Compute posterior summaries, perform posterior predictive checks, and evaluate model comparison criteria.
This Bayesian framework provides a principled approach to estimating the fractional Weibull model, fully quantifying parameter uncertainty while maintaining theoretical coherence with the fractional calculus foundation established in Section 3.

3.9. Computational Implementation

The Bayesian inference for all models was implemented in Stan 2.33.0 [29] accessed through the R interface (version 4.3.1) with the rstan package. We employed the No-U-Turn Sampler (NUTS) [34], the default Hamiltonian Monte Carlo sampler in Stan. For each model, we ran 4 independent Markov chains with different random initializations. Each chain consisted of 2000 iterations, with the first 1000 iterations discarded as warm-up, yielding 4000 posterior samples for inference.
Convergence was assessed using the potential scale reduction factor R ^ [33], with all parameters satisfying R ^ < 1.01 , and ensuring the effective sample size (ESS) exceeded 400 per chain for all parameters, indicating sufficient independent samples from the posterior. All computations were performed on a Linux workstation with an Intel Xeon E5-2680 v4 processor (2.4 GHz) and 64 GB of RAM. The average computation time for the BFW model on the synthetic data ( n = 800 ) was approximately 45 min, while the turbofan engine case study (n = 56,000) required approximately 3.2 h.

4. Numerical Experiments

This section presents a comprehensive empirical evaluation of the proposed Bayesian Fractional Weibull (BFW) regression framework. The experiments are designed with two primary objectives: first, to validate the theoretical properties and estimation efficacy of the model under controlled conditions using synthetic data, and second, to demonstrate its practical utility and superior performance on a large-scale, real-world reliability engineering benchmark. Through rigorous comparison against established classical and flexible alternatives, we assess the model’s capacity to enhance prognostic accuracy and uncertainty quantification, addressing core challenges in modern reliability analysis with big data.

4.1. Synthetic Data Experiments

This section validates our Bayesian Fractional Weibull (BFW) framework through controlled experiments where the ground truth is known. We demonstrate two key properties: (1) theoretical consistency by recovering the classical model when appropriate, and (2) practical utility by adapting to data-generating processes that violate classical Weibull assumptions.

4.2. Experimental Design and Data Generation

We design two distinct scenarios to probe different aspects of our methodology:
  • Scenario A (Well-Specified Classical DGP): Data are generated from a standard Weibull AFT model with true parameters β true = [ 1.0 , 0.5 , 0.8 ] T and α true = 1.8 . Covariates x i are sampled from N ( 0 , 1 ) , and failure times follow T i ∼ Weibull ( α true , exp ( x i T β true ) ) . We introduce 30% random right-censoring with sample size n = 800 by generating censoring times C i ∼ U ( 0 , c max ) , where c max is chosen to achieve the target censoring rate. The observed time is then t i = min ( T i , C i ) with censoring indicator δ i = I ( T i ≤ C i ) . Here, the classical model is correct, and we expect the BFW model to identify q ≈ 1 .
  • Scenario B (Misspecified DGP): To evaluate robustness, we generate data from a mixture of two Weibull distributions. The majority population (80% of units) follows the same DGP as Scenario A, while a contaminated sub-population (20% of units) has a different shape parameter α contam = 0.9 , creating heavier tails and early-life failures. For each unit i, we first draw Z i ∼ Bernoulli ( 0.2 ) to determine sub-population membership. The failure time is then generated as
T i ∼ Weibull ( α true , exp ( x i T β true ) ) if Z i = 0 , and T i ∼ Weibull ( α contam , exp ( x i T β true ) ) if Z i = 1 .
The same censoring mechanism as Scenario A is applied, maintaining 30% random right-censoring with n = 800 . The fractional framework is expected to adapt to this misspecification by estimating q ≠ 1 , as the fractional derivative introduces additional flexibility in modeling hazard function dynamics beyond the fixed form of the classical Weibull hazard.
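The two data-generating processes above can be sketched jointly in Python; β true and the two shape parameters follow the text, while c_max must be tuned by trial to hit the 30% censoring target (the paper does not report its value):

```python
import math
import random

BETA_TRUE = [1.0, 0.5, 0.8]          # Scenario A regression coefficients
ALPHA_TRUE, ALPHA_CONTAM = 1.8, 0.9  # shapes for the two sub-populations

def simulate(n, c_max, contaminate=False, seed=0):
    """Generate (x_i, t_i, delta_i) triples under Scenario A (contaminate=False)
    or the 80/20 Weibull mixture of Scenario B (contaminate=True)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = [rng.gauss(0.0, 1.0) for _ in range(3)]
        alpha = ALPHA_TRUE
        if contaminate and rng.random() < 0.2:  # Z_i ~ Bernoulli(0.2)
            alpha = ALPHA_CONTAM                # early-life failure sub-population
        lam = math.exp(sum(b * v for b, v in zip(BETA_TRUE, x)))
        t_fail = lam * (-math.log(rng.random())) ** (1.0 / alpha)  # inverse CDF
        c = rng.uniform(0.0, c_max)             # random right-censoring time
        data.append((x, min(t_fail, c), int(t_fail <= c)))
    return data
```

With the Weibull parameterized by shape α and scale λ = exp ( x T β ), the inverse-CDF draw T = λ ( − log U ) 1 / α reproduces the AFT structure directly.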

4.2.1. Estimation Setup and Competing Models

We evaluate our BFW model with linear deformation g ( q ) = q against three benchmark models:
  • Bayesian Fractional Weibull (BFW): Our proposed model with q ∼ Beta ( 5 , 5 ; 0.7 , 1.3 ) (Scaled Beta centered at 1);
  • Classical Bayesian Weibull (CBW): Standard Weibull AFT model with q 1 ;
  • Bayesian Log-Normal (BLN): Alternative parametric survival model;
  • Bayesian Generalized Gamma (BGG): A flexible three-parameter distribution that generalizes both Weibull and log-normal models.
All models use consistent weakly informative priors: β ∼ N ( 0 , 10 I ) , and for Weibull-based models, α ∼ Gamma ( 2 , 0.5 ) . Estimation uses Stan with 4 chains of 2000 iterations each, with convergence assessed via R ^ < 1.01 .

4.2.2. Evaluation Metrics

We employ multiple criteria to assess model performance:
  • Parameter Recovery: For Scenario A, we examine whether posterior credible intervals contain true parameter values.
  • Fractional Order Estimation: We analyze the posterior distribution of q to verify correct model identification.
  • Predictive Performance: We compute WAIC [30] and LOO-CV [31] scores.
  • Uncertainty Quantification: We assess coverage of 95% posterior predictive intervals.

4.2.3. Summary of Results

The primary results from the synthetic experiments are summarized in Table 1 (Predictive Performance Metrics) and Figure 1 (Posterior Distributions of the fractional order q).

4.2.4. Results and Interpretation

Scenario A Results (Theoretical Validation): As shown in Table 1, the BFW model correctly identifies that no fractional adjustment is needed, and Table 2 shows that BFW recovers all true parameters with high accuracy. The posterior distributions of the fractional order parameter q for the two scenarios are visualized in Figure 1 and Figure 2. Figure 1 demonstrates that in Scenario A (well-specified classical model), the posterior of q concentrates at 1.02 (95% credible interval [0.94, 1.10]), correctly indicating that no fractional adjustment is needed and validating Theorem 1. Conversely, Figure 2 shows that in Scenario B (misspecified model), the posterior shifts leftward to q = 0.86 (95% CI [0.79, 0.93]), clearly reflecting the model's adaptation to distributional misspecification through fractional order adjustment. These visualizations provide intuitive evidence of the BFW framework's ability to determine automatically when fractional flexibility is beneficial.
The CBW model shows slightly better predictive performance (WAIC: 1521.8 vs. 1523.4), which is expected given its parsimony when the model is correctly specified. Both Weibull-based models outperform the log-normal alternative, and all models show proper coverage of 95% posterior predictive intervals.
Scenario B Results (Practical Utility): The BFW model successfully adapts to distributional misspecification, estimating q = 0.86 (95% CI: [0.79, 0.93]) as shown in Figure 2. This translates to superior predictive performance in Table 1, with BFW achieving significantly lower WAIC (1689.2) and LOO-CV (1690.5) values compared to all benchmarks. The improvement over CBW (WAIC: 1723.6) is substantial, demonstrating the fractional model’s ability to handle violations of Weibull assumptions. Notably, BFW also outperforms the flexible BGG model (WAIC: 1702.7), suggesting that the fractional approach provides a different and potentially more effective form of flexibility. The BFW model maintains proper predictive interval coverage (0.941), while CBW shows under-coverage (0.892) due to model misspecification.

4.2.5. Sensitivity to Deformation Function

To validate our choice of the linear deformation function g ( q ) = q , we conducted a sensitivity analysis comparing it against a nonlinear alternative deformation function under the misspecified Scenario B. The results, summarized in Table 3, demonstrate that while both deformation functions enable the BFW model to significantly outperform the classical Weibull model, the proposed linear form yields marginally better predictive performance (WAIC: 1689.2 vs. 1691.5; LOO-CV: 1690.5 vs. 1692.8) and produces a more stable posterior distribution for the fractional order q, with a tighter credible interval [0.79, 0.93] compared to [0.72, 0.90]. These findings empirically support our selection of g ( q ) = q , as it provides an optimal balance of robustness, predictive accuracy, and computational stability.

4.2.6. Discussion

These synthetic experiments serve multiple important purposes. Scenario A provides empirical validation of Theorem 1, demonstrating that our fractional framework contains the classical estimator as a special case. The results show that when the classical model is appropriate, BFW correctly identifies this and provides nearly identical inference. Scenario B demonstrates that the fractional order q acts as a meaningful flexibility parameter that can adapt to realistic model misspecifications. The superior performance of BFW over both classical and generalized gamma models in the misspecified scenario highlights the unique value of the fractional calculus approach. The results establish that our Bayesian implementation provides proper uncertainty quantification and reasonable computational performance. This controlled validation provides a solid foundation for interpreting the real-world applications that follow.

4.3. Real-World Case Study: Turbofan Engine Prognostics

Dataset and Preprocessing

To demonstrate the practical utility of our Bayesian Fractional Weibull (BFW) framework in a genuine big-data reliability context, we employ the NASA C-MAPSS (Commercial Modular Aero-Propulsion System Simulation) dataset [35], a widely recognized benchmark in prognostics and health management. This dataset perfectly aligns with the special issue’s focus on “Big Data Analytics in Reliability Engineering” and “Predictive Maintenance and Prognostics.”
We utilize the FD004 sub-dataset, which is the most challenging scenario, featuring
  • Operating Conditions: Six different operational settings;
  • Fault Modes: Two simultaneous fault modes (fan degradation and compressor degradation);
  • Scale: 248 engines for training and 249 for testing;
  • Complexity: 26 sensor measurements recorded across multiple operational cycles.
The dataset structure represents a classic condition-based maintenance scenario where the objective is to predict the Remaining Useful Life (RUL) of each engine based on sensor degradation signals. The RUL is defined as the number of operational cycles until engine failure, with failure thresholds determined by domain experts [36].
Preprocessing Pipeline: We implement a comprehensive feature engineering pipeline following established practices in aerospace prognostics [37]:
  • Sensor Selection: We identify and retain 14 relevant sensors out of the original 26 based on variance analysis (excluding sensors with variance <0.001 across the training set) and engineering significance. The selected sensors include core temperature, pressure ratios, and rotational speeds that are physically meaningful for degradation tracking.
  • Health Indicator Construction: For each engine, we construct a unified health indicator using the first principal component (PC1) from a Principal Component Analysis (PCA) performed on the 14 selected, normalized sensor readings. PC1 typically captures the dominant degradation trend, explaining approximately 65% of the total variance in the sensor data.
  • Feature Engineering: We extract rolling window statistics (mean, variance, and linear trend) across operational cycles using a window size of 10 cycles to capture temporal degradation patterns. This generates 3 temporal features for each sensor and the health indicator, resulting in 15 temporal features.
  • Operational Regime Clustering: We cluster the six operational conditions into 3 distinct operational regimes using k-means clustering ( k = 3 ) with Euclidean distance, based on the standardized operational setting values. This accounts for different stress regimes experienced by the engines during operation.
The final processed dataset contains 56,000 observations with 18 engineered features (15 temporal features + 1 health indicator + 2 operational regime indicators), representing a substantial big-data reliability analysis challenge. All features were standardized to zero mean and unit variance before model training.
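The rolling-window feature step of the pipeline can be sketched as below: per engine and per signal, the mean, variance, and linear trend (least-squares slope) are computed over a trailing window of 10 cycles. The handling of the first few cycles (shorter trailing windows) is our assumption, since the text does not specify edge behavior:

```python
def rolling_features(series, window=10):
    """Trailing-window (mean, variance, slope) features for one signal of one engine."""
    feats = []
    for end in range(1, len(series) + 1):
        w = series[max(0, end - window):end]   # trailing window, truncated at the start
        n = len(w)
        mean = sum(w) / n
        var = sum((v - mean) ** 2 for v in w) / n
        # least-squares slope of the window values against cycle index 0..n-1
        xbar = (n - 1) / 2.0
        sxx = sum((i - xbar) ** 2 for i in range(n))
        slope = 0.0 if sxx == 0 else sum(
            (i - xbar) * (v - mean) for i, v in enumerate(w)) / sxx
        feats.append((mean, var, slope))
    return feats
```

The slope term is the component that tracks monotone degradation trends; a sensor drifting linearly at one unit per cycle yields a slope of 1.0 once the window fills.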

4.4. Competing Models and Experimental Setup

We compare our BFW model against several state-of-the-art prognostic approaches:
  • Bayesian Fractional Weibull (BFW): Our proposed model with linear deformation g ( q ) = q and q ∼ Beta ( 5 , 5 ; 0.7 , 1.3 ) ;
  • Classical Bayesian Weibull (CBW): Standard Bayesian Weibull AFT model serving as the baseline;
  • Bayesian Generalized Gamma (BGG): Flexible three-parameter lifetime distribution;
  • Deep Weibull Network (DWN): Neural network with Weibull output layer [38];
  • Convolutional LSTM (CLSTM): State-of-the-art deep learning approach for sequence prognostics [39].
All Bayesian models use consistent weakly informative priors and are implemented in Stan with 4 chains of 2000 iterations. The deep learning models are implemented in TensorFlow with early stopping and hyperparameter optimization.
Training Strategy: We employ a rolling-origin evaluation scheme [40] where models are trained on increasing historical windows and tested on subsequent cycles, simulating real-world deployment conditions.

Evaluation Metrics

We employ comprehensive evaluation metrics specifically designed for prognostic performance assessment [35]:
  • Mean Absolute Error (MAE): MAE = ( 1 / n ) ∑ i = 1 n | RUL true ( i ) − RUL pred ( i ) |
  • Root Mean Square Error (RMSE): RMSE = √ ( ( 1 / n ) ∑ i = 1 n ( RUL true ( i ) − RUL pred ( i ) ) 2 )
  • Prognostic Horizon (PH): Number of cycles where RUL predictions remain within ± 10 cycles of true RUL.
  • α - λ Accuracy: Probability that predictions lie within λ = 20 % of true RUL at α = 0.1 confidence level.
  • Uncertainty Calibration: Assessment of whether 95% predictive intervals achieve nominal coverage.
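The scalar metrics can be sketched as follows; the prognostic-horizon convention used here (longest terminal run of predictions within ±10 cycles of the true RUL) is our reading of the definition, as exact conventions vary across the prognostics literature, and the α - λ and calibration metrics are omitted since they require the full predictive distribution:

```python
def mae(true, pred):
    """Mean absolute error over paired true/predicted RUL values."""
    return sum(abs(a - b) for a, b in zip(true, pred)) / len(true)

def rmse(true, pred):
    """Root mean square error over paired true/predicted RUL values."""
    return (sum((a - b) ** 2 for a, b in zip(true, pred)) / len(true)) ** 0.5

def prognostic_horizon(true, pred, band=10):
    """Longest suffix of the trajectory on which |error| <= band cycles holds."""
    h = 0
    for a, b in zip(reversed(true), reversed(pred)):
        if abs(a - b) <= band:
            h += 1
        else:
            break
    return h
```

For a degrading unit, the horizon rewards models whose predictions lock onto the true RUL early and stay within the band until failure.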

4.5. Results and Analysis

Predictive Performance: As shown in Table 4, our BFW model achieves superior performance across all metrics. The MAE of 12.3 cycles represents a 21.7% improvement over the classical CBW model (MAE: 15.7) and also outperforms other state-of-the-art prognostic models, including deep learning approaches. The RMSE of 16.8 cycles further demonstrates the robustness of BFW predictions.
Prognostic Horizon and Accuracy: The BFW model achieves the longest prognostic horizon (28 cycles) and highest α - λ accuracy (0.89), indicating its predictions remain accurate further in advance of actual failures. This early and accurate prediction capability is crucial for effective maintenance planning.
Uncertainty Quantification: A key advantage of our Bayesian approach is demonstrated in the coverage metric. The BFW model achieves near-nominal coverage (0.94) for 95% predictive intervals, significantly outperforming the deep learning approaches (DWN: 0.87, CLSTM: 0.85). This reliable uncertainty quantification is essential for risk-informed decision making in safety-critical applications.
Fractional Order Adaptation: The posterior distribution of the fractional order q concentrates at 0.88 (95% CI: [0.82, 0.94]), indicating that the complex degradation patterns in the turbofan data benefit from the additional flexibility provided by the fractional framework. This represents a significant deviation from the classical Weibull model ( q = 1 ) and explains the performance improvements.
Computational Efficiency: While BFW requires more computation than CBW (3.2 h vs. 2.8 h), it is substantially more efficient than deep learning approaches (DWN: 8.7 h, CLSTM: 12.4 h). This makes BFW suitable for practical deployment where both accuracy and computational constraints must be considered.

Engineering Implications and Discussion

The superior performance of BFW has significant implications for reliability engineering practice:
Maintenance Optimization: The improved prognostic horizon enables more effective condition-based maintenance scheduling. Maintenance can be planned with greater confidence 28 cycles in advance, compared to 22 cycles with classical approaches.
Risk Management: The well-calibrated uncertainty quantification supports better risk assessment and resource allocation decisions. Operators can make informed choices about spare part inventory and maintenance crew scheduling.
Model Interpretability: Unlike black-box deep learning approaches, BFW maintains interpretability through the fractional order parameter q, which provides insight into the nature of the degradation process. The estimated q < 1 suggests that the turbofan degradation exhibits memory effects or long-range dependencies that are naturally captured by fractional calculus.
Scalability: The demonstrated performance on this substantial dataset (56,000 observations, 18 features) confirms that BFW scales effectively to big-data reliability scenarios, addressing a key requirement of modern industrial applications.
This case study establishes that the Bayesian Fractional Weibull framework provides a principled, interpretable, and computationally efficient approach for prognostic applications, combining the theoretical rigor of fractional calculus with the practical advantages of Bayesian inference for real-world reliability engineering challenges.

5. Conclusions

This paper has introduced a comprehensive Bayesian Fractional Weibull (BFW) regression framework that fundamentally bridges fractional calculus with reliability modeling, addressing key limitations of traditional Weibull regression in handling complex modern failure data. Our work makes three primary contributions. Theoretically, we derived fractional score equations for Weibull regression via Caputo fractional derivatives, establishing a continuous generalization of classical maximum likelihood estimation. Theorem 1 rigorously proves that the classical estimator emerges as a special case when the fractional order q → 1 , ensuring mathematical consistency with established methods. Methodologically, we developed a fully Bayesian implementation using Hamiltonian Monte Carlo that enables complete uncertainty quantification—including for the fractional order q—while maintaining the interpretability of the fractional parameter. The likelihood deformation approach ( ℓ frac = g ( q ) · ℓ ( β , α ) ) provides a principled bridge between classical and fractional paradigms. Empirically, extensive numerical experiments demonstrate the framework's efficacy across both synthetic and real-world scenarios.
The synthetic experiments show that BFW correctly identifies well-specified scenarios ( q ≈ 1 ) and robustly adapts to misspecification ( q = 0.86 in Scenario B), outperforming classical Weibull and flexible alternatives like the Generalized Gamma model in predictive accuracy and uncertainty calibration. The real-world application to the NASA C-MAPSS turbofan engine dataset (56,000 observations, 18 features) demonstrates substantial practical utility: BFW achieves a 21.7% improvement in mean absolute error for remaining useful life prediction compared to classical Weibull regression, with superior prognostic horizon (28 cycles), α - λ accuracy (0.89), and well-calibrated predictive intervals.
The BFW framework is particularly recommended in reliability scenarios involving heterogeneous populations, complex degradation dynamics with memory effects, uncertainty-sensitive decision-making requiring well-calibrated predictive intervals, and big-data reliability analytics where classical parametric assumptions may be systematically violated. This guidance addresses practical implementation considerations for reliability engineers working with modern condition monitoring data.
Several promising avenues for extension and refinement emerge from this work. Future theoretical extensions could investigate optimal deformation functions g ( q ) and extend the fractional framework to other lifetime distributions such as Gamma, log-Normal, and inverse Gaussian families. Methodological hybridization with semi-parametric methods could develop interpretable yet flexible hybrid models, building upon recent work in non-parametric failure modeling [21]. Computational advancements could develop more efficient sampling algorithms tailored to the fractional posterior landscape for ultra-high-dimensional reliability datasets. Applications to system reliability with dependent components and multi-state degradation processes [41] would benefit naturally from the fractional framework’s intrinsic capability to capture intricate dependency patterns. Finally, theoretical investigations into the frequentist properties of Bayesian fractional estimators and their large-sample behavior would strengthen the methodological foundations and facilitate wider adoption across both academic and industrial reliability practice.
The BFW framework represents a significant advancement in reliability modeling, combining the theoretical rigor of fractional calculus with the practical advantages of Bayesian inference. By providing a continuous, interpretable parameter that controls model flexibility, our approach enables reliability engineers to capture complex failure patterns while maintaining physical interpretability and mathematical coherence—addressing a critical need in the era of big-data reliability analytics and predictive maintenance.

Author Contributions

Methodology: M.A. and M.M.; Software: M.A. and M.M.; Investigation: M.A. and M.M.; Writing—original draft preparation: M.A. and M.M.; Writing—review and editing: M.A. and M.M.; Funding acquisition: M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia [Grant No. KFU254847]. M. Manigandan would like to thank SRM TRP Engineering College, India, for their financial support, grant number SRM/TRP/RI/005.

Informed Consent Statement

Not applicable.

Data Availability Statement

The NASA C-MAPSS turbofan engine dataset used in the real-world case study is publicly available from the NASA Prognostics Data Repository at https://ti.arc.nasa.gov/tech/dash/groups/pcoe/prognostic-data-repository/ (accessed on 20 November 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Lei, Y.; Li, N.; Guo, L.; Li, N.; Yan, T.; Lin, J. Machinery health prognostics: A systematic review from data acquisition to RUL prediction. Mech. Syst. Signal Process. 2018, 104, 799–834. [Google Scholar] [CrossRef]
  2. Si, X.S.; Wang, W.; Hu, C.H.; Zhou, D.H. Remaining useful life estimation—A review on the statistical data driven approaches. Eur. J. Oper. Res. 2011, 213, 1–14. [Google Scholar] [CrossRef]
  3. Lee, J.; Wu, F.; Zhao, W.; Ghaffari, M.; Liao, L.; Siegel, D. Prognostics and health management design for rotary machinery systems—Reviews, methodology and applications. Mech. Syst. Signal Process. 2014, 42, 314–334. [Google Scholar] [CrossRef]
  4. Wang, B.; Lei, Y.; Li, N.; Li, N. A hybrid prognostics approach for estimating remaining useful life of rolling element bearings. IEEE Trans. Reliab. 2020, 69, 401–412. [Google Scholar] [CrossRef]
  5. Weibull, W. A statistical distribution function of wide applicability. J. Appl. Mech. 1951, 18, 293–297. [Google Scholar] [CrossRef]
  6. Meeker, W.Q.; Escobar, L.A.; Pascual, F.G. Statistical Methods for Reliability Data, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2017. [Google Scholar]
  7. Meng, H.; Li, Y.F. A review on prognostics and health management (PHM) methods of lithium-ion batteries. Renew. Sustain. Energy Rev. 2019, 116, 109405. [Google Scholar] [CrossRef]
  8. Liu, K.; Chehade, A.; Song, C. Optimize the signal quality of the composite health index via data fusion for degradation modeling and prognostic analysis. IEEE Trans. Autom. Sci. Eng. 2021, 14, 1504–1514. [Google Scholar] [CrossRef]
  9. Dhada, M.; Bull, L.; Girolami, M.; Parlikad, A. Collaborative prognosis using a Weibull statistical hierarchical model. Reliab. Eng. Syst. Saf. 2025, 262, 111110. [Google Scholar] [CrossRef]
  10. Wang, Z.; Kong, X.; Guo, J.; Zhao, B.; Xie, L.; Wu, N. An improved adaptive gradient-based optimization algorithm for estimating the parameters of three-parameter Weibull distribution: An application of aero-engine reliability assessment. Reliab. Eng. Syst. Saf. 2025, 265, 111610. [Google Scholar] [CrossRef]
  11. Mudholkar, G.S.; Srivastava, D.K.; Freimer, M. The exponentiated Weibull family: A reanalysis of the bus-motor-failure data. Technometrics 1995, 35, 436–445. [Google Scholar] [CrossRef]
  12. Prentice, R.L. A log gamma model and its maximum likelihood estimation. Biometrika 1974, 61, 539–544. [Google Scholar] [CrossRef]
  13. Birnbaum, Z.W.; Saunders, S.C. A new family of life distributions. J. Appl. Probab. 1969, 6, 319–327. [Google Scholar] [CrossRef]
  14. Kalbfleisch, J.D.; Prentice, R.L. The Statistical Analysis of Failure Time Data; John Wiley & Sons: Hoboken, NJ, USA, 2002. [Google Scholar]
  15. Fu, E.; Hu, Y.; Peng, K.; Chu, Y. Supervised contrastive learning based dual-mixer model for remaining useful life prediction. Reliab. Eng. Syst. Saf. 2024, 251, 110398. [Google Scholar] [CrossRef]
  16. Wang, Y.; Peng, S.; Wang, H.; Zhang, M.; Cao, H.; Ma, L. Remaining useful life prediction based on graph feature attention networks with missing multi-sensor features. Reliab. Eng. Syst. Saf. 2025, 258, 110902. [Google Scholar] [CrossRef]
  17. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; Elsevier: Amsterdam, The Netherlands, 2006. [Google Scholar]
  18. Podlubny, I. Fractional Differential Equations: An introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications; Academic Press: Cambridge, MA, USA; Elsevier: Amsterdam, The Netherlands, 1998; Volume 198. [Google Scholar]
  19. Behrendt, M.; Fragkoulis, V.C.; Pasparakis, G.D.; Beer, M. Probabilistic failure analysis of stochastically excited nonlinear structural systems with fractional derivative elements. Reliab. Eng. Syst. Saf. 2025, 266, 111647. [Google Scholar] [CrossRef]
  20. Luo, Y.; Spanos, P.D. Non-stationary response determination of linear systems/structures with fractional derivative elements. Reliab. Eng. Syst. Saf. 2025, 261, 111056. [Google Scholar] [CrossRef]
  21. Bahootoroody, A.; Abaei, M.M.; Zio, E.; Goerlandt, F.; Chaal, M. Gaussian process latent variable model and Bayesian inference for non-parametric failure modeling applied to ship engine. Reliab. Eng. Syst. Saf. 2025, 265, 111611. [Google Scholar] [CrossRef]
  22. Hamada, M.S.; Wilson, A.G.; Reese, C.S.; Martz, H.F. Bayesian Reliability; Springer Science & Business Media: New York, NY, USA, 2008. [Google Scholar]
  23. Abba, B.; Wu, J.; Muhammad, M. A robust multi-risk model and its reliability relevance: A Bayes study with Hamiltonian Monte Carlo methodology. Reliab. Eng. Syst. Saf. 2024, 250, 110310. [Google Scholar] [CrossRef]
  24. Klein, J.P.; Moeschberger, M.L. Multivariate Survival Analysis. In Survival Analysis: Techniques for Censored and Truncated Data; Springer: New York, NY, USA, 2006; pp. 425–441. [Google Scholar]
  25. Samko, S.G.; Kilbas, A.A.; Marichev, O.I. Fractional Integrals and Derivatives: Theory and Applications; Gordon and Breach Science Publishers: London, UK, 1993. [Google Scholar]
  26. Awadalla, M.; Noupoue, Y.Y.Y.; Tandogdu, Y.; Abuasbeh, K. Regression coefficient derivation via fractional calculus framework. J. Math. 2022, 2022, 1144296. [Google Scholar] [CrossRef]
  27. Li, K.; Datta, S. Power prior and its use in clinical trials. In Advances in Statistical Methods for the Health Sciences; Birkhäuser Boston: Boston, MA, USA, 2006; pp. 377–390. [Google Scholar]
  28. Neal, R.M. MCMC Using Hamiltonian Dynamics. In Handbook of Markov Chain Monte Carlo; Brooks, S., Gelman, A., Jones, G., Meng, X.-L., Eds.; Chapman and Hall/CRC: Boca Raton, FL, USA, 2011; Volume 2, p. 2. [Google Scholar]
  29. Carpenter, B.; Gelman, A.; Hoffman, M.D.; Lee, D.; Goodrich, B.; Betancourt, M.; Brubaker, M.; Guo, J.; Li, P.; Riddell, A. Stan: A probabilistic programming language. J. Stat. Softw. 2017, 76, 1–32. [Google Scholar] [CrossRef] [PubMed]
  30. Watanabe, S. A widely applicable Bayesian information criterion. J. Mach. Learn. Res. 2013, 14, 867–897. [Google Scholar]
  31. Vehtari, A.; Gelman, A.; Gabry, J. Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Stat. Comput. 2017, 27, 1413–1432. [Google Scholar] [CrossRef]
  32. Gelman, A.; Meng, X.L.; Stern, H. Posterior predictive assessment of model fitness via realized discrepancies. Stat. Sin. 1996, 6, 733–760. [Google Scholar]
  33. Gelman, A.; Rubin, D.B. Inference from iterative simulation using multiple sequences. Stat. Sci. 1992, 7, 457–472. [Google Scholar] [CrossRef]
  34. Hoffman, M.D.; Gelman, A. The No-U-Turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. J. Mach. Learn. Res. 2014, 15, 1593–1623. [Google Scholar]
  35. Saxena, A.; Goebel, K.; Simon, D.; Eklund, N. Damage propagation modeling for aircraft engine run-to-failure simulation. In Proceedings of the 2008 International Conference on Prognostics and Health Management, Denver, CO, USA, 6–9 October 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1–9. [Google Scholar] [CrossRef]
  36. Heimes, F.O. Recurrent neural networks for remaining useful life estimation. In Proceedings of the 2008 International Conference on Prognostics and Health Management, Denver, CO, USA, 6–9 October 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1–6. [Google Scholar] [CrossRef]
  37. Ramasso, E.; Saxena, A. Investigating computational geometry for failure prognostics. Int. J. Progn. Health Manag. 2014, 5, 005. [Google Scholar] [CrossRef]
  38. Martin, K.F. A review by discussion of condition monitoring and fault diagnosis in machine tools. Int. J. Mach. Tools Manuf. 2019, 144, 103–127. [Google Scholar] [CrossRef]
  39. Babu, G.S.; Zhao, P.; Li, X.L. Deep convolutional neural network based regression approach for estimation of remaining useful life. In Database Systems for Advanced Applications; Springer: Cham, Switzerland, 2016; pp. 214–228. [Google Scholar] [CrossRef]
  40. Bergmeir, C.; Hyndman, R.J.; Koo, B. A note on the validity of cross-validation for evaluating autoregressive time series prediction. Comput. Stat. Data Anal. 2018, 120, 70–83. [Google Scholar] [CrossRef]
  41. Wei, F.; Ma, X.; Qiu, Q.; Ma, Y.; Wang, J.; Yang, L. Adaptive Mission Risk Control under Incomplete Health Information and Resource Limitation: A Constrained Multi-State Predictive Maintenance Model. Reliab. Eng. Syst. Saf. 2025, 266, 111697. [Google Scholar] [CrossRef]
Figure 1. Posterior distribution of fractional order q for Scenario A (Classical DGP). The distribution concentrates around q = 1, correctly identifying that no fractional adjustment is needed.
Figure 2. Posterior distribution of fractional order q for Scenario B (Misspecified DGP). The distribution shifts to q = 0.86, indicating adaptation to model misspecification.
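The fractional-order posteriors shown in Figures 1 and 2 are built on a Weibull accelerated failure time likelihood. As a point of reference for the classical q = 1 special case, the sketch below evaluates that log-likelihood under right censoring. The log-scale covariate link and all variable names are illustrative assumptions, not taken from the paper's implementation.

```python
import numpy as np

def weibull_aft_loglik(beta, alpha, X, t, delta):
    """Log-likelihood of a Weibull accelerated failure time model with
    right censoring (delta = 1 for observed failures, 0 for censored).
    The scale is linked to covariates via log(lambda_i) = X_i @ beta
    (an illustrative assumption), and alpha is the Weibull shape."""
    log_lam = X @ beta                     # linear predictor on the log scale
    z = alpha * (np.log(t) - log_lam)      # standardized log-time, (t/lam)^alpha = e^z
    log_f = np.log(alpha) - np.log(t) + z - np.exp(z)  # log density for failures
    log_S = -np.exp(z)                                 # log survival for censored
    return np.sum(delta * log_f + (1.0 - delta) * log_S)
```

For a single uncensored observation at t = 1 with beta = 0 and alpha = 1, the standardized term z is 0 and the log-likelihood reduces to −1, which is a convenient sanity check when porting the function.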
Table 1. Predictive performance metrics for synthetic data experiments. Lower values indicate better performance for WAIC and LOO-CV. Best results are shown in bold.

Model  | Scenario A (Classical)            | Scenario B (Misspecified)
       | WAIC    LOO-CV  95% PPI Coverage  | WAIC    LOO-CV  95% PPI Coverage
BFW    | 1523.4  1524.1  0.947             | 1689.2  1690.5  0.941
CBW    | 1521.8  1522.3  0.951             | 1723.6  1725.1  0.892
BGG    | 1525.1  1525.9  0.943             | 1702.7  1704.2  0.928
DWN    | 1531.7  1532.4  0.925             | 1718.9  1720.3  0.883
CLSTM  | 1528.3  1529.1  0.918             | 1708.5  1709.9  0.874
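The WAIC values reported in Table 1 can be computed from posterior draws of the pointwise log-likelihood using the standard formula of Watanabe [30] as presented by Vehtari et al. [31]. The sketch below assumes a draws-by-observations matrix of log-likelihood values is already available; it is a minimal reference implementation, not the paper's code.

```python
import numpy as np

def waic(log_lik):
    """WAIC on the deviance scale from an (S draws x N observations)
    matrix of pointwise log-likelihood values. Lower is better, matching
    the convention used in Table 1."""
    S = log_lik.shape[0]
    # log pointwise predictive density via a numerically stable log-mean-exp
    lppd = np.sum(np.logaddexp.reduce(log_lik, axis=0) - np.log(S))
    # effective number of parameters: posterior variance of the log-likelihood
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
    return -2.0 * (lppd - p_waic)
```

Because the penalty term p_waic grows with the posterior variance of the pointwise log-likelihood, overly flexible models are penalized even when their in-sample fit is good, which is why Table 1 can rank the fractional model above deep-learning baselines under misspecification.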
Table 2. Parameter recovery for Scenario A (Classical DGP). True values: β0 = 1.0, β1 = −0.5, β2 = 0.8, α = 1.8.

Parameter | BFW Mean  BFW 95% CI     | CBW Mean  CBW 95% CI     | True Value  In CI?
β0        | 1.02      [0.89, 1.15]   | 1.01      [0.88, 1.14]   | 1.00        Yes
β1        | −0.48     [−0.61, −0.35] | −0.49     [−0.62, −0.36] | −0.50       Yes
β2        | 0.81      [0.68, 0.94]   | 0.82      [0.69, 0.95]   | 0.80        Yes
α         | 1.76      [1.62, 1.91]   | 1.77      [1.63, 1.92]   | 1.80        Yes
q         | 1.02      [0.94, 1.10]   | -         -              | 1.00        Yes
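The "In CI?" column of Table 2 reduces to checking whether each true parameter lies inside the equal-tailed 95% posterior credible interval. A minimal sketch of that check, with synthetic normal draws mimicking the β1 row purely for illustration:

```python
import numpy as np

def in_credible_interval(draws, true_value, level=0.95):
    """Check whether a known true parameter value falls inside the
    equal-tailed posterior credible interval of the given level."""
    tail = (1.0 - level) / 2.0
    lo, hi = np.quantile(draws, [tail, 1.0 - tail])
    return bool(lo <= true_value <= hi)

# Illustrative draws loosely matching the beta_1 row of Table 2
rng = np.random.default_rng(0)
draws = rng.normal(loc=-0.48, scale=0.065, size=4000)
print(in_credible_interval(draws, -0.50))  # the true value sits well inside the CI
```

Repeating this over many simulated datasets yields the frequentist coverage of the Bayesian intervals, which is the usual way such "Yes" entries are validated in a parameter-recovery study.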
Table 3. Sensitivity analysis of deformation functions for Scenario B (Misspecified DGP).

Deformation Function g(q) | WAIC    LOO-CV  Posterior Mean of q (95% CI)
g(q) = q (Proposed)       | 1689.2  1690.5  0.86 [0.79, 0.93]
g(q) = q                  | 1691.5  1692.8  0.81 [0.72, 0.90]