Article

State-Space Estimation in Discriminant Subspace: A Kalman Filtering Approach for Turbofan Engine RUL Prediction

Department of Electrical and Electronic Engineering, Faculty of Engineering, Adana Alparslan Türkeş Science and Technology University, Adana 01250, Türkiye
*
Author to whom correspondence should be addressed.
Machines 2026, 14(2), 226; https://doi.org/10.3390/machines14020226
Submission received: 14 January 2026 / Revised: 10 February 2026 / Accepted: 12 February 2026 / Published: 14 February 2026

Abstract

Accurate remaining useful life (RUL) prediction of turbofan engines is critical for aviation safety and maintenance optimization; however, deep learning approaches often lack interpretability and require extensive training data. This study proposes a framework integrating Linear Discriminant Analysis (LDA) with Kalman filtering for turbofan engine prognostics. The methodology projects high-dimensional sensor measurements onto a two-dimensional LDA subspace, where degradation trajectories are tracked using state-space estimation, with RUL predictions derived from distances to learned critical failure boundaries. A health index-based classification scheme partitions engine states into three operational regions: Critical, Warning, and Healthy. Three Kalman filter variants—Linear Kalman Filter (LKF), Extended Kalman Filter (EKF), and Unscented Kalman Filter (UKF)—were compared against an Autoregressive (AR) baseline using the NASA C-MAPSS dataset. Using the Prognostics and Health Management 2008 asymmetric scoring function, UKF achieved the best performance with a Score of 552,572, representing a 54.9% improvement over AR (1,224,299), indicating substantially fewer late predictions. While RMSE values remained comparable across methods (36–37 cycles), the Kalman filter variants demonstrated meaningful improvements in avoiding dangerous late predictions critical for safety-oriented maintenance scheduling. EKF also demonstrated substantial improvement with a 36.1% Score reduction. Classification accuracy improved from 70.72% (AR) to 73.27% (UKF). The proposed LDA–Kalman framework provides a computationally efficient and geometrically interpretable alternative to deep learning methods for real-time engine health monitoring.

1. Introduction

The aviation industry plays a pivotal role in global transportation, connecting economies and facilitating international commerce. With the continuous growth in air traffic—global passenger numbers reached approximately 4.7 billion in 2024, representing a full recovery to pre-pandemic levels [1]—ensuring the safety and reliability of aircraft systems has become increasingly critical. Among the most safety-critical components of commercial aircraft, turbofan engines demand particular attention due to their complex operational dynamics and the catastrophic consequences associated with in-flight failures [2,3].
Predictive maintenance (PdM) has emerged as a transformative paradigm in the aviation industry, fundamentally shifting maintenance strategies from reactive and scheduled approaches toward data-driven, condition-based methodologies [4,5]. Unlike traditional maintenance practices that rely on fixed time intervals or post-failure interventions, PdM leverages real-time sensor data, historical operational records, and advanced analytics to anticipate equipment degradation and predict failures before they occur [6,7]. This proactive approach offers substantial benefits including reduced unplanned downtime, optimized maintenance scheduling, extended component lifespans, and most importantly, enhanced flight safety [8,9].
The concept of RUL prediction lies at the heart of prognostics and health management (PHM) systems [9,10]. RUL represents the expected operational time remaining before a component or system reaches its failure threshold and requires maintenance intervention. Accurate RUL prediction enables maintenance planners to optimize the timing of interventions, balancing the competing objectives of maximizing asset utilization while minimizing failure risk [11,12]. For turbofan engines specifically, where component failures can have severe safety implications, reliable RUL estimation is essential for maintaining airworthiness and operational efficiency [13,14].
The NASA Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) dataset has become the de facto benchmark for evaluating RUL prediction algorithms in the turbofan engine domain [2,13]. This simulated dataset provides run-to-failure trajectories for multiple engine units operating under various conditions and fault modes, enabling systematic comparison of different prognostic approaches. The dataset comprises four subsets (FD001–FD004) with increasing complexity in terms of operational conditions and fault types, offering a comprehensive testbed for algorithm development and validation [15].
Recent years have witnessed remarkable advances in data-driven approaches for RUL prediction, particularly through the application of deep learning techniques [16,17]. Long Short-Term Memory (LSTM) networks have demonstrated exceptional capability in capturing temporal dependencies within sequential sensor data [18,19]. Convolutional Neural Networks (CNNs) have been successfully employed for automatic feature extraction from raw sensor signals [17,20]. Hybrid architectures combining CNNs and LSTMs have shown improved performance by leveraging both spatial feature extraction and temporal modeling capabilities [21,22]. Attention mechanisms have further enhanced these models by enabling selective focus on the most relevant temporal patterns [23,24]. Transformer-based architectures have recently been explored for RUL prediction, demonstrating competitive performance through self-attention mechanisms [25,26]. Contemporary developments have further expanded these capabilities through hybrid approaches integrating large-kernel convolutions with Markov transition modeling [27], change point detection for operational condition variations [28], and comprehensive reviews of machine learning-based predictive maintenance methodologies [29], unsupervised autoencoder–GMM frameworks [30], multi-scale dilated fusion attention models [31], and dual-attention mechanisms for power machinery [32].
Despite the impressive results achieved by deep learning methods, they present several limitations that motivate the exploration of alternative approaches. Deep neural networks typically require large volumes of training data to achieve optimal performance—a requirement that may not be satisfied in practical scenarios where failure events are rare [33,34]. The black-box nature of deep learning models poses challenges for interpretability and trust in safety-critical applications [35,36]. Furthermore, these models often lack the ability to incorporate physical knowledge about degradation processes, potentially limiting their extrapolation capabilities [13,37].
To address these challenges, Kalman filtering techniques have emerged as a compelling alternative offering theoretical advantages across multiple dimensions. Unlike autoregressive models that treat observations independently with fixed coefficients, Kalman filters provide optimal recursive state estimation through dynamic adjustment based on measurement uncertainty [38]. For nonlinear degradation processes, the Extended Kalman Filter (EKF) employs first-order Taylor-series linearization [39], while the Unscented Kalman Filter (UKF) uses deterministic sigma point sampling for improved accuracy [40]. These methods have been successfully applied to RUL prediction in lithium-ion batteries [41,42,43], rolling bearings [44,45,46], and aircraft components [47,48].
When integrated with Linear Discriminant Analysis (LDA)-based dimensionality reduction, three synergistic benefits emerge for turbofan engine prognostics. First, computational efficiency: the LDA projection from 21 sensors to 2 discriminant components enables $O(n^3)$ Kalman operations with an approximately 1000× speedup over the original sensor space, facilitating real-time embedded implementation [47,48]. Second, enhanced robustness: the supervised LDA projection [49,50] concentrates the discriminative signal into a low-dimensional subspace while distributing sensor-specific noise across the discarded dimensions, preventing misinterpretation of operational variability as degradation. Third, unified interpretability: hyperplane boundaries in LDA space [51,52] provide natural thresholds for multi-class health-state classification (nominal, warning, critical), while signed distances enable direct RUL regression. This geometrically meaningful representation, in which degradation trajectories exhibit clear progression toward failure boundaries, enables simultaneous classification and regression with uncertainty quantification through covariance matrices, capabilities that have proven effective for machinery fault diagnosis and condition monitoring [53,54,55].
Despite these advantages, Kalman filtering in discriminant subspaces remains underexplored for turbofan prognostics. Existing studies have combined Kalman filters with data-driven models [56,57], but direct application in LDA-projected spaces specifically designed for degradation monitoring represents a significant opportunity to leverage optimal state estimation where class separability is maximized.
Building upon the previous work of Yıldırım et al. [58], which proposed a unified linear classification–regression framework using LDA-based hyperplane boundaries, this study extends the methodology by incorporating Kalman filtering for improved state estimation. While [58] achieved RMSE values of 30.40–45.80 cycles across C-MAPSS sub-datasets through autoregressive modeling, that approach exhibited two key limitations: (1) sensitivity to measurement noise during transitional degradation phases, and (2) fixed coefficients unable to adapt to varying degradation rates. The present work addresses these limitations by replacing the regression component with optimal state-space estimation, providing adaptive noise filtering and dynamic parameter estimation while retaining the effective LDA-based dimensionality reduction and boundary-based classification.
The key novelty lies in integrating Kalman filtering within an LDA-projected discriminant subspace. Unlike conventional Kalman-based prognostics operating in high-dimensional sensor spaces [47,48] or physics-derived feature spaces [59], our method performs state estimation in a subspace where degradation trajectories align with maximum class separability. While LDA has been used for fault diagnosis [51,52,53,54,55], combining it with recursive state estimation for continuous RUL prediction enables unified classification and regression within a single framework. Compared to deep learning approaches [17,18,19,20,21,22,23,24], our method provides natural uncertainty quantification through Kalman covariances and interpretability through linear projection weights—capabilities requiring additional modifications (MC dropout, ensembles) in neural networks.
The main contributions of this work are summarized as follows:
  • A novel framework that integrates Linear Discriminant Analysis with Kalman filtering for turbofan engine RUL prediction, enabling simultaneous state estimation and boundary-based classification in a reduced-dimensional discriminant space.
  • A comprehensive comparative analysis of three Kalman filter variants (LKF, EKF, UKF) against an AR baseline, providing insights into the trade-offs between prediction accuracy and computational complexity.
  • Empirical validation on the NASA C-MAPSS benchmark dataset demonstrating that Kalman filter methods achieve RMSE improvements ranging from 2.4% (LKF) to 7.1% (UKF) and Score improvements ranging from 36.1% (EKF) to 54.9% (UKF) compared to the AR baseline.
  • Analysis of the bias characteristics of different estimation methods, revealing that EKF provides nearly unbiased predictions while UKF exhibits conservative bias suitable for safety-critical applications.
The remainder of this paper is organized as follows. Section 2 presents the materials and methods, including the problem formulation, LDA-based feature extraction, and detailed descriptions of the Kalman filter variants. Section 3 presents the experimental results and comparative analysis. Section 4 concludes the paper with a summary of contributions and directions for future research.

2. Materials and Methods

2.1. Problem Definition

This study considers turbofan engines monitored by $D$ sensors, where $D \geq 2$. Let $\mathbf{x} = [x_1, x_2, \ldots, x_D]^T \in \mathbb{R}^D$ denote the vector of sensor readings at any given time. These sensors measure physical quantities such as temperature, pressure, and rotational speed at various engine cross-sections.
The predictive maintenance problem involves two interconnected objectives:
First, the classification objective: partition the measurement space $\mathbb{R}^D$ into three disjoint regions:
  • Nominal region $\mathcal{R}_{\mathrm{nom}}$: represents healthy operational states.
  • Warning region $\mathcal{R}_{\mathrm{warn}}$: indicates degradation onset.
  • Critical region $\mathcal{R}_{\mathrm{crit}}$: requires immediate maintenance action.
These regions satisfy $\mathcal{R}_{\mathrm{nom}} \cup \mathcal{R}_{\mathrm{warn}} \cup \mathcal{R}_{\mathrm{crit}} = \mathbb{R}^D$ and are mutually exclusive. The classification function is defined as
C(\mathbf{x}_t) = \begin{cases} \text{nominal} & \text{if } \mathbf{x}_t \in \mathcal{R}_{\mathrm{nom}} \\ \text{warning} & \text{if } \mathbf{x}_t \in \mathcal{R}_{\mathrm{warn}} \\ \text{critical} & \text{if } \mathbf{x}_t \in \mathcal{R}_{\mathrm{crit}} \end{cases}
Second, the regression objective: given a sequence of sensor readings $\{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_t\}$ with current state $\mathbf{x}_t \in \mathcal{R}_{\mathrm{nom}} \cup \mathcal{R}_{\mathrm{warn}}$, estimate the RUL:
\hat{T} = \min \{ T \in \mathbb{N} : \mathbf{x}_{t+T} \in \mathcal{R}_{\mathrm{crit}} \}
In the presented framework, these regions are defined using hyperplane boundaries in LDA space. Let $\mathbf{w}_1, \mathbf{w}_2 \in \mathbb{R}^D$ and $b_1, b_2 \in \mathbb{R}$ define two hyperplanes that partition the space:
\mathcal{R}_{\mathrm{nom}} = \{ \mathbf{x} \in \mathbb{R}^D : \mathbf{w}_1^T \mathbf{x} + b_1 < 0 \}
\mathcal{R}_{\mathrm{warn}} = \{ \mathbf{x} \in \mathbb{R}^D : \mathbf{w}_1^T \mathbf{x} + b_1 \geq 0 \ \text{and} \ \mathbf{w}_2^T \mathbf{x} + b_2 < 0 \}
\mathcal{R}_{\mathrm{crit}} = \{ \mathbf{x} \in \mathbb{R}^D : \mathbf{w}_2^T \mathbf{x} + b_2 \geq 0 \}
The signed distances to these hyperplanes serve as features for RUL estimation:
d_{\mathrm{warn}}(\mathbf{x}_t) = \frac{\mathbf{w}_1^T \mathbf{x}_t + b_1}{\| \mathbf{w}_1 \|_2}
d_{\mathrm{crit}}(\mathbf{x}_t) = \frac{\mathbf{w}_2^T \mathbf{x}_t + b_2}{\| \mathbf{w}_2 \|_2}
These distances provide a natural connection between classification boundaries and continuous degradation estimation, ensuring consistency between discrete state classification and continuous RUL prediction.
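The two-hyperplane partition and signed-distance features above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the hyperplane parameters `(w1, b1)` and `(w2, b2)` below are hypothetical placeholders for the boundaries learned from data.

```python
import numpy as np

# Hypothetical hyperplane parameters standing in for the learned boundaries.
w1, b1 = np.array([1.0, 0.0]), -1.0   # warning boundary
w2, b2 = np.array([1.0, 0.0]), -2.0   # critical boundary

def signed_distance(x, w, b):
    """Signed distance from x to the hyperplane w^T x + b = 0."""
    return (w @ x + b) / np.linalg.norm(w)

def classify(x):
    """Three-region partition induced by the two hyperplanes."""
    if w2 @ x + b2 >= 0:
        return "critical"
    if w1 @ x + b1 >= 0:
        return "warning"
    return "nominal"
```

With these placeholder boundaries, `classify(np.array([0.5, 0.0]))` falls in the nominal region, while points beyond $x_1 = 2$ are critical; the same signed distance drives both the discrete label and the continuous RUL feature.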

2.2. Linear Discriminant Analysis for Degradation Feature Extraction

Linear Discriminant Analysis (LDA) provides a principled approach for dimensionality reduction that maximizes class separability [49,50]. By projecting high-dimensional sensor data onto a lower-dimensional subspace that maximizes the ratio of between-class scatter to within-class scatter, LDA creates a feature space where different operational states are optimally distinguishable [51,52]. This property is particularly valuable for degradation monitoring, where the goal is to track the progression from healthy to degraded states. Recent studies have demonstrated the effectiveness of LDA-based feature extraction for machinery fault diagnosis [53,54] and condition monitoring [55].
LDA is employed to project the high-dimensional sensor measurements onto a lower-dimensional subspace that maximizes class separability. This projection serves two purposes: dimensionality reduction and enhanced discrimination between operational states.

2.2.1. Health Index-Based Class Definition

Unlike fixed-cycle thresholds commonly used in the literature, this study employs a normalized health index approach for class definition. The health index (HI) is computed by normalizing the RUL values with respect to the maximum observed lifetime across the dataset:
\mathrm{HI}_t = \frac{\mathrm{RUL}_t}{\mathrm{RUL}_{\max}}
where $\mathrm{RUL}_{\max}$ represents the maximum RUL observed across all training and test engine units. This normalization maps all RUL values to the interval $[0, 1]$, where $\mathrm{HI} = 1$ indicates a healthy engine at the beginning of its operational life and $\mathrm{HI} = 0$ corresponds to failure.
The continuous health index is then discretized into K = 3 classes using predefined thresholds:
y_t = \begin{cases} 1\ (\text{Critical}) & \text{if } \mathrm{HI}_t < 0.1 \\ 2\ (\text{Warning}) & \text{if } 0.1 \leq \mathrm{HI}_t < 0.3 \\ 3\ (\text{Healthy}) & \text{if } \mathrm{HI}_t \geq 0.3 \end{cases}
This health index-based classification partitions the operational space into three distinct regions:
  • Critical region ($\mathcal{R}_{\mathrm{crit}}$): $\mathrm{HI} \in [0.0, 0.1)$—Engines approaching imminent critical failure requiring immediate maintenance intervention.
  • Warning region ($\mathcal{R}_{\mathrm{warn}}$): $\mathrm{HI} \in [0.1, 0.3)$—Degraded engines requiring increased monitoring and maintenance planning.
  • Healthy region ($\mathcal{R}_{\mathrm{healthy}}$): $\mathrm{HI} \in [0.3, 1.0]$—Engines operating within normal parameters.
The advantage of this normalized approach over fixed-cycle thresholds is that it inherently adapts to varying engine lifetimes across the dataset, providing consistent classification boundaries regardless of the absolute RUL scale.
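The normalization and thresholding above amount to two small functions; a minimal sketch with the paper's thresholds (0.1, 0.3):

```python
def health_index(rul, rul_max):
    """Normalized health index HI = RUL / RUL_max, mapped to [0, 1]."""
    return rul / rul_max

def health_class(hi):
    """Discretize HI into the three classes using thresholds 0.1 and 0.3."""
    if hi < 0.1:
        return 1  # Critical
    if hi < 0.3:
        return 2  # Warning
    return 3  # Healthy
```

For example, an engine with 30 cycles remaining out of a fleet-wide maximum of 300 has HI = 0.1 and sits exactly on the Warning boundary (class 2), regardless of the absolute RUL scale of its subset.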

2.2.2. LDA Formulation

Let $\{\mathbf{x}_i, y_i\}_{i=1}^{N}$ denote the training set, where $\mathbf{x}_i \in \mathbb{R}^D$ is the sensor measurement vector and $y_i \in \{1, 2, \ldots, K\}$ is the class label. The within-class scatter matrix $\mathbf{S}_W$ and between-class scatter matrix $\mathbf{S}_B$ are defined as
\mathbf{S}_W = \sum_{k=1}^{K} \sum_{\mathbf{x}_i \in C_k} (\mathbf{x}_i - \boldsymbol{\mu}_k)(\mathbf{x}_i - \boldsymbol{\mu}_k)^T
\mathbf{S}_B = \sum_{k=1}^{K} N_k (\boldsymbol{\mu}_k - \boldsymbol{\mu})(\boldsymbol{\mu}_k - \boldsymbol{\mu})^T
where $C_k$ denotes the set of samples belonging to class $k$, $N_k = |C_k|$ is the number of samples in class $k$, $\boldsymbol{\mu}_k = \frac{1}{N_k} \sum_{\mathbf{x}_i \in C_k} \mathbf{x}_i$ is the mean of class $k$, and $\boldsymbol{\mu} = \frac{1}{N} \sum_{i=1}^{N} \mathbf{x}_i$ is the global mean.
The LDA projection matrix $\mathbf{W} \in \mathbb{R}^{D \times (K-1)}$ is obtained by maximizing the Fisher criterion:
\mathbf{W}^* = \arg\max_{\mathbf{W}} \frac{|\mathbf{W}^T \mathbf{S}_B \mathbf{W}|}{|\mathbf{W}^T \mathbf{S}_W \mathbf{W}|}
The solution is given by the generalized eigenvalue problem:
\mathbf{S}_B \mathbf{w} = \lambda \mathbf{S}_W \mathbf{w}
The projection matrix $\mathbf{W}^*$ consists of the $(K - 1)$ eigenvectors corresponding to the largest eigenvalues. For the three-class problem, this yields a two-dimensional LDA space.
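The scatter matrices and the generalized eigenproblem above can be written compactly in NumPy. This is a generic Fisher-LDA sketch (a small ridge term is added for numerical stability, an assumption not stated in the paper):

```python
import numpy as np

def lda_projection(X, y, n_components=2):
    """Fisher LDA: solve S_B w = lambda S_W w; return W (D x n_components) and the global mean."""
    classes = np.unique(y)
    D = X.shape[1]
    mu = X.mean(axis=0)
    Sw = np.zeros((D, D))
    Sb = np.zeros((D, D))
    for k in classes:
        Xk = X[y == k]
        mu_k = Xk.mean(axis=0)
        Sw += (Xk - mu_k).T @ (Xk - mu_k)                 # within-class scatter
        Sb += len(Xk) * np.outer(mu_k - mu, mu_k - mu)    # between-class scatter
    # Equivalent standard eigenproblem: S_W^{-1} S_B w = lambda w
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-8 * np.eye(D), Sb))
    order = np.argsort(evals.real)[::-1]
    W = evecs[:, order[:n_components]].real
    return W, mu
```

Projection of a new sample then follows Section 2.2.3: `z = W.T @ (x - mu)`. For $K = 3$ classes, $\mathbf{S}_B$ has rank at most 2, so only two discriminant directions carry information.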

2.2.3. Projection to LDA Space

Each sensor measurement vector is projected onto the discriminant subspace:
\mathbf{z}_t = \mathbf{W}^{*T} (\mathbf{x}_t - \boldsymbol{\mu})
where $\mathbf{z}_t \in \mathbb{R}^{K-1}$ represents the coordinates in LDA space. The degradation trajectory $\{\mathbf{z}_1, \mathbf{z}_2, \ldots, \mathbf{z}_t\}$ in this reduced space exhibits a systematic progression from the nominal region toward the critical region as the engine degrades.

2.2.4. RUL Estimation in LDA Space

In the LDA-projected space, the RUL can be estimated using the distance to the critical failure boundary. Assuming linear degradation dynamics, the trajectory in LDA space can be modeled as
\mathbf{z}_{t+\Delta t} = \mathbf{z}_t + \boldsymbol{\lambda} \Delta t + \boldsymbol{\epsilon}_t
where $\boldsymbol{\lambda}$ is the degradation rate vector and $\boldsymbol{\epsilon}_t$ represents process noise.
Given the current position $\mathbf{z}_t$ and estimated degradation rate $\hat{\boldsymbol{\lambda}}$, the RUL is computed as the time required to reach the critical boundary:
\widehat{\mathrm{RUL}}_t = \frac{d_{\mathrm{crit}}(\mathbf{z}_t)}{\| \hat{\boldsymbol{\lambda}} \| \cos\theta}
where $d_{\mathrm{crit}}(\mathbf{z}_t)$ is the signed distance to the critical boundary and $\theta$ is the angle between the degradation direction and the boundary normal. When the trajectory is approximately perpendicular to the boundary, a simplified computation yields
\widehat{\mathrm{RUL}}_t \approx \frac{d_{\mathrm{crit}}(\mathbf{z}_t)}{\| \hat{\boldsymbol{\lambda}} \|}
The key advantage of performing RUL estimation in LDA space is that the projection inherently aligns the degradation trajectory with the direction of maximum class separation, making the distance-based RUL estimation more robust to sensor noise and correlated measurements.
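The distance-over-rate computation above reduces to a one-liner; a minimal sketch, assuming the distance and rate vector are already available:

```python
import numpy as np

def rul_estimate(d_crit, rate, cos_theta=1.0):
    """RUL = distance to the critical boundary / approach speed along the boundary normal."""
    return d_crit / (np.linalg.norm(rate) * cos_theta)
```

For instance, a trajectory 10 units from the critical boundary, degrading at rate $\|\hat{\boldsymbol{\lambda}}\| = 0.5$ units per cycle perpendicular to the boundary, yields an estimated RUL of 20 cycles.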

2.2.5. Rationale for LDA-Based Projection

The selection of LDA over alternative dimensionality reduction techniques is motivated by three key considerations. First, unlike PCA which maximizes variance irrespective of class structure, LDA’s supervised optimization directly aligns the projection with health-state separability—essential for degradation tracking where operational variations may dominate total variance [49]. Second, nonlinear methods such as t-SNE and UMAP lack parametric projection functions, preventing real-time application to new test samples and producing stochastic embeddings that compromise reproducibility [60]. Third, LDA’s linear projection preserves Gaussian noise characteristics, ensuring Kalman filter optimality in the projected space. Additionally, the hyperplane boundaries inherent in LDA enable direct geometric interpretation of distance-to-failure as RUL (Equation (7)), unifying classification and regression within a single framework.
Validity of Linear Separability Assumptions: While degradation processes may exhibit nonlinear characteristics in the original sensor space, the LDA projection empirically yields approximately linear degradation trajectories in the reduced space. As shown in Figure 1, the near-parallel decision boundaries confirm that degradation paths follow approximately linear trajectories through the discriminant space. This linearization occurs because LDA’s optimization objective inherently straightens class boundaries by maximizing between-class separation.
Robustness to Non-Gaussian Characteristics: The framework addresses potential non-Gaussian deviations through the hybrid LDA–Kalman approach. While LDA performs dimensionality reduction under locally linear assumptions, the nonlinear Kalman filters (EKF, UKF) capture accelerating wear behavior via the exponential degradation model (Equation (42)). The UKF’s sigma point sampling is particularly effective for non-Gaussian distributions, capturing statistics accurately to second order without requiring explicit Gaussian assumptions [40]. The consistent performance across all four C-MAPSS sub-datasets—encompassing different operational conditions and fault modes—provides empirical validation that the linear separability assumption holds sufficiently well for practical turbofan engine prognostics.

2.2.6. Advantages of Kalman Filtering in LDA Space

Computational Efficiency: The Kalman filter has $O(n^3)$ complexity per update. In the original sensor space with $D = 21$ sensors augmented with velocity states, operations on a 42-dimensional state vector would be required. The LDA projection reduces this to $n = 4$ (LKF) or $n = 3$ (EKF/UKF), yielding a complexity reduction exceeding 1000× and enabling real-time implementation on embedded systems.
Robustness to Noise: In the original sensor space, measurement noise from 21 sensors propagates independently, potentially obscuring degradation trends. The LDA projection concentrates the discriminative signal into the low-dimensional subspace while distributing sensor-specific noise across the discarded dimensions. The projected noise covariance $\mathbf{R}_{\mathrm{LDA}} = \mathbf{W}^{*T} \mathbf{R}_{\mathrm{orig}} \mathbf{W}^*$ retains only noise components aligned with the discriminant directions, effectively filtering uncorrelated sensor noise.
Geometrical Interpretability: State estimation in the original 21-dimensional space yields filtered values with complex interdependencies lacking intuitive interpretation. In contrast, the two-dimensional LDA state corresponds to positions along maximum health-state separability directions, where the signed distance to critical boundaries (Equation (7)) provides an immediately geometrically interpretable degradation severity measure.

2.3. Linear Kalman Filter

The Linear Kalman Filter (LKF) provides optimal state estimation for linear systems with Gaussian noise. In the context of RUL prediction, LKF is applied to filter the noisy degradation trajectory in LDA space.

2.3.1. State-Space Model

The degradation dynamics in LDA space are modeled as a linear state-space system:
\mathbf{z}_{t+1} = \mathbf{A} \mathbf{z}_t + \mathbf{B} \mathbf{u}_t + \mathbf{w}_t
\mathbf{y}_t = \mathbf{C} \mathbf{z}_t + \mathbf{v}_t
where $\mathbf{z}_t \in \mathbb{R}^n$ is the state vector, $\mathbf{y}_t \in \mathbb{R}^m$ is the observation vector, $\mathbf{u}_t$ is the control input, $\mathbf{A} \in \mathbb{R}^{n \times n}$ is the state transition matrix, $\mathbf{B} \in \mathbb{R}^{n \times p}$ is the control input matrix, and $\mathbf{C} \in \mathbb{R}^{m \times n}$ is the observation matrix. The process noise $\mathbf{w}_t \sim \mathcal{N}(\mathbf{0}, \mathbf{Q})$ and measurement noise $\mathbf{v}_t \sim \mathcal{N}(\mathbf{0}, \mathbf{R})$ are assumed to be zero-mean Gaussian with covariance matrices $\mathbf{Q}$ and $\mathbf{R}$, respectively.

2.3.2. Prediction Step

The a priori state estimate and error covariance are computed as
\hat{\mathbf{z}}_{t|t-1} = \mathbf{A} \hat{\mathbf{z}}_{t-1|t-1} + \mathbf{B} \mathbf{u}_{t-1}
\mathbf{P}_{t|t-1} = \mathbf{A} \mathbf{P}_{t-1|t-1} \mathbf{A}^T + \mathbf{Q}
where $\hat{\mathbf{z}}_{t|t-1}$ denotes the predicted state at time $t$ given observations up to $t - 1$, and $\mathbf{P}_{t|t-1}$ is the corresponding error covariance matrix.

2.3.3. Update Step

When a new observation y t becomes available, the Kalman gain and posterior estimates are computed:
\mathbf{K}_t = \mathbf{P}_{t|t-1} \mathbf{C}^T (\mathbf{C} \mathbf{P}_{t|t-1} \mathbf{C}^T + \mathbf{R})^{-1}
\hat{\mathbf{z}}_{t|t} = \hat{\mathbf{z}}_{t|t-1} + \mathbf{K}_t (\mathbf{y}_t - \mathbf{C} \hat{\mathbf{z}}_{t|t-1})
\mathbf{P}_{t|t} = (\mathbf{I} - \mathbf{K}_t \mathbf{C}) \mathbf{P}_{t|t-1}
where $\mathbf{K}_t$ is the Kalman gain that optimally weights the innovation $(\mathbf{y}_t - \mathbf{C} \hat{\mathbf{z}}_{t|t-1})$.
For the constant velocity model in LDA space, the state vector is defined as
\mathbf{z}_t = [z_{1,t}, \; z_{2,t}, \; \dot{z}_{1,t}, \; \dot{z}_{2,t}]^T
where $z_{1,t}$ and $z_{2,t}$ are the LDA coordinates, and $\dot{z}_{1,t}$, $\dot{z}_{2,t}$ are their respective velocities.
The state transition matrix with unit time step ($\Delta t = 1$) is
\mathbf{A} = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
The observation matrix extracts the position components:
\mathbf{H} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}
The process noise covariance matrix is configured as
\mathbf{Q}_{\mathrm{LKF}} = \mathrm{diag}(q_{\mathrm{pos}}, q_{\mathrm{pos}}, q_{\mathrm{vel}}, q_{\mathrm{vel}}) = \mathrm{diag}(0.5, \; 0.5, \; 0.001, \; 0.001)
where $q_{\mathrm{pos}} = 0.5$ represents position process noise and $q_{\mathrm{vel}} = 0.001$ represents velocity process noise.
The measurement noise covariance $\mathbf{R}$ is estimated adaptively from the first $n_{\mathrm{init}} = 20$ observations:
\mathbf{R} = \mathrm{diag}\big( \max(\sigma_{z_1}^2, 0.001), \; \max(\sigma_{z_2}^2, 0.001) \big)
The initial state covariance is set to
\mathbf{P}_0 = \mathrm{diag}(1, \; 1, \; 0.1, \; 0.1)
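The constant-velocity LKF described in this subsection can be sketched directly from its matrices. A minimal illustration: the fixed `R` below is a placeholder (the paper estimates it adaptively from the first 20 observations), and the trajectory in the usage example is synthetic.

```python
import numpy as np

# Constant-velocity model in the 2-D LDA space (unit time step).
A = np.block([[np.eye(2), np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])   # position += velocity each cycle
H = np.hstack([np.eye(2), np.zeros((2, 2))])    # observe positions only
Q = np.diag([0.5, 0.5, 0.001, 0.001])           # process noise (paper's values)
R = np.diag([0.01, 0.01])                       # placeholder for the adaptive estimate

def lkf_step(z, P, y):
    """One predict/update cycle of the linear Kalman filter."""
    # Predict
    z_pred = A @ z
    P_pred = A @ P @ A.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    z_new = z_pred + K @ (y - H @ z_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return z_new, P_new
```

Iterating `lkf_step` over a projected trajectory yields a smoothed position and an estimated drift velocity; on a noiseless linear trajectory the velocity states converge to the true degradation rate, which then feeds the distance-based RUL computation.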

2.3.4. Application to RUL Prediction

For degradation modeling in LDA space, the state vector is augmented to include both position and degradation rate:
\mathbf{z}_t = \begin{bmatrix} z_{1,t} \\ z_{2,t} \end{bmatrix}, \quad \mathbf{A} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad \mathbf{C} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
The filtered state estimate z ^ t | t provides a smoothed trajectory in LDA space, from which the RUL is computed using the distance to the critical boundary as described in Equation (17).

2.4. Extended Kalman Filter

The Extended Kalman Filter (EKF) extends the Linear Kalman Filter framework to handle nonlinear state transition and observation models by linearizing them with a first-order Taylor-series expansion around the current state estimate [39]. This is particularly relevant for degradation processes that exhibit nonlinear behavior, such as the exponential wear patterns commonly observed in turbofan engines.

2.4.1. Nonlinear State-Space Model

The nonlinear system dynamics are described by
\mathbf{z}_{t+1} = f(\mathbf{z}_t, \mathbf{u}_t) + \mathbf{w}_t
\mathbf{y}_t = h(\mathbf{z}_t) + \mathbf{v}_t
where $f(\cdot)$ and $h(\cdot)$ are nonlinear state transition and observation functions, respectively. The noise terms $\mathbf{w}_t \sim \mathcal{N}(\mathbf{0}, \mathbf{Q})$ and $\mathbf{v}_t \sim \mathcal{N}(\mathbf{0}, \mathbf{R})$ remain Gaussian.

2.4.2. Linearization via Taylor-Series Expansion

The EKF approximates the nonlinear functions using first-order Taylor-series expansion around the current state estimate. The Jacobian matrices are computed as
\mathbf{F}_t = \left. \frac{\partial f}{\partial \mathbf{z}} \right|_{\mathbf{z} = \hat{\mathbf{z}}_{t|t}}
\mathbf{H}_t = \left. \frac{\partial h}{\partial \mathbf{z}} \right|_{\mathbf{z} = \hat{\mathbf{z}}_{t|t-1}}
where $\mathbf{F}_t \in \mathbb{R}^{n \times n}$ is the Jacobian of the state transition function and $\mathbf{H}_t \in \mathbb{R}^{m \times n}$ is the Jacobian of the observation function.

2.4.3. Prediction Step

The a priori state estimate is propagated through the nonlinear function, while the covariance uses the linearized model:
\hat{\mathbf{z}}_{t|t-1} = f(\hat{\mathbf{z}}_{t-1|t-1}, \mathbf{u}_{t-1})
\mathbf{P}_{t|t-1} = \mathbf{F}_{t-1} \mathbf{P}_{t-1|t-1} \mathbf{F}_{t-1}^T + \mathbf{Q}

2.4.4. Update Step

The measurement update follows the standard Kalman filter form with the linearized observation model:
\mathbf{K}_t = \mathbf{P}_{t|t-1} \mathbf{H}_t^T (\mathbf{H}_t \mathbf{P}_{t|t-1} \mathbf{H}_t^T + \mathbf{R})^{-1}
\hat{\mathbf{z}}_{t|t} = \hat{\mathbf{z}}_{t|t-1} + \mathbf{K}_t \big( \mathbf{y}_t - h(\hat{\mathbf{z}}_{t|t-1}) \big)
\mathbf{P}_{t|t} = (\mathbf{I} - \mathbf{K}_t \mathbf{H}_t) \mathbf{P}_{t|t-1}

2.4.5. Extended Kalman Filter Parameter Configuration

The EKF employs an exponential degradation model with the state vector
\mathbf{x}_t = [\mathrm{HI}_t, \; \lambda_t, \; z_{2,t}]^T
where $\mathrm{HI}_t \in [0, 1]$ is the normalized health index (derived from the first LDA component), $\lambda_t$ is the degradation rate parameter, and $z_{2,t}$ is the second LDA component.
The nonlinear state transition follows
\mathrm{HI}_{t+1} = \mathrm{HI}_t \cdot \exp(-\lambda_t \, \Delta t)
The Jacobian matrix for linearization is
\mathbf{F}_t = \begin{bmatrix} \exp(-\lambda_t \Delta t) & -\mathrm{HI}_t \, \Delta t \, \exp(-\lambda_t \Delta t) & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
The process noise covariance matrix is
\mathbf{Q}_{\mathrm{EKF}} = \mathrm{diag}(q_{\mathrm{HI}}, q_{\lambda}, q_{z_2}) = \mathrm{diag}(0, \; 2 \times 10^{-4}, \; 0.001)
State constraints are enforced at each iteration:
\mathrm{HI}_t \in [0.01, 1.0], \quad \lambda_t \in [0, 0.1]
Initial state estimate:
\mathbf{x}_0 = [\max(0.5, \min(1, \mathrm{HI}_{\mathrm{meas},0})), \; 0.002, \; z_{2,\mathrm{meas},0}]^T
Initial covariance:
\mathbf{P}_0 = \mathrm{diag}(0.1, \; 0.0001, \; 1)
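The exponential-decay EKF configured above can be sketched as follows. This is an illustrative reading of the subsection, not the paper's code: in particular, the observation model (direct measurements of HI and $z_2$, i.e., a linear $\mathbf{H}$) and the measurement noise `Rm` are assumptions made for the sketch.

```python
import numpy as np

dt = 1.0
Q = np.diag([0.0, 2e-4, 0.001])      # paper's process noise
Rm = np.diag([0.01, 0.01])           # assumed measurement noise on (HI, z2)
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])      # assumption: HI and z2 observed directly

def f(x):
    """Exponential health-index decay; lambda and z2 follow random walks."""
    hi, lam, z2 = x
    return np.array([hi * np.exp(-lam * dt), lam, z2])

def jacobian(x):
    """Analytic Jacobian of f at the current estimate."""
    hi, lam, _ = x
    e = np.exp(-lam * dt)
    return np.array([[e, -hi * dt * e, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])

def ekf_step(x, P, y):
    """Predict through the nonlinear model, update with the linearized one."""
    F = jacobian(x)
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + Rm
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(3) - K @ H) @ P_pred
    # State constraints from the paper
    x_new[0] = np.clip(x_new[0], 0.01, 1.0)
    x_new[1] = np.clip(x_new[1], 0.0, 0.1)
    return x_new, P_new
```

Because $\lambda_t$ is never measured directly, the filter infers it from successive HI innovations through the off-diagonal Jacobian term, which is why $q_{\lambda} = 2 \times 10^{-4}$ governs how quickly the estimated degradation rate can adapt.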

2.4.6. Degradation Model for RUL Prediction

For turbofan engine degradation, an exponential model captures the accelerating wear behavior:
f(\mathbf{z}_t, \lambda) = \mathbf{z}_t + \lambda e^{\alpha t} \boldsymbol{\delta}
where $\lambda$ is the degradation rate parameter, $\alpha$ controls the acceleration of degradation, and $\boldsymbol{\delta}$ is the unit direction vector toward the critical region. The corresponding Jacobian is
\mathbf{F}_t = \mathbf{I} + \alpha \lambda e^{\alpha t} \frac{\partial \boldsymbol{\delta}}{\partial \mathbf{z}}
The EKF provides improved state estimation when the degradation dynamics deviate from linearity, at the cost of requiring Jacobian computation at each time step.

2.5. Framework Overview

Figure 2 illustrates the complete LDA–Kalman framework for turbofan engine prognostics, consisting of two main phases: training and testing.
During the training phase, raw sensor measurements from run-to-failure engine trajectories are first preprocessed to remove zero-variance sensors that provide no discriminative information. The health index is then computed by normalizing RUL values with respect to the maximum observed lifetime across the dataset. Based on predefined thresholds (0.1 and 0.3), each sample is assigned to one of three health states: Critical, Warning, or Healthy. Linear Discriminant Analysis is applied to learn the optimal projection matrix W * that maximizes class separability in a two-dimensional discriminant space. Finally, linear regression coefficients are estimated to map LDA coordinates to normalized RUL values.
During the testing phase, sensor measurements from a test engine are projected onto the learned LDA subspace using W * . Four parallel filtering approaches are then applied to the projected coordinates: the AR model serves as a baseline, while three Kalman filter variants (LKF, EKF, and UKF) provide state-space estimation with different modeling assumptions. The filtered LDA coordinates z ^ t are used for two simultaneous outputs: RUL prediction through the learned regression model, and health-state classification based on the position relative to class boundaries. This unified framework enables both continuous degradation monitoring and discrete maintenance decision support within a single computationally efficient architecture.

2.6. Unscented Kalman Filter

The Unscented Kalman Filter (UKF) addresses the limitations of EKF linearization through a deterministic sampling approach known as the unscented transform [40]. Instead of linearizing the nonlinear functions, the UKF propagates a set of carefully chosen sigma points through the true nonlinear model, providing more accurate mean and covariance estimates for highly nonlinear systems without requiring explicit calculation of Jacobian matrices. The method has demonstrated effectiveness across diverse applications in prognostics and health management [42,45,48].

2.6.1. Sigma Point Generation

For a state vector $\hat{\mathbf{z}} \in \mathbb{R}^n$ with covariance $\mathbf{P}$, a set of $2n+1$ sigma points is generated:
$$\boldsymbol{\chi}^{(0)} = \hat{\mathbf{z}}$$
$$\boldsymbol{\chi}^{(i)} = \hat{\mathbf{z}} + \left( \sqrt{(n+\kappa)\mathbf{P}} \right)_i, \quad i = 1, \ldots, n$$
$$\boldsymbol{\chi}^{(i+n)} = \hat{\mathbf{z}} - \left( \sqrt{(n+\kappa)\mathbf{P}} \right)_i, \quad i = 1, \ldots, n$$
where $\left( \sqrt{(n+\kappa)\mathbf{P}} \right)_i$ denotes the $i$-th column of the matrix square root, and $\kappa$ is a scaling parameter typically set as $\kappa = 3 - n$ for Gaussian distributions.
The associated weights for mean and covariance computation are
$$W_m^{(0)} = \frac{\kappa}{n+\kappa}, \qquad W_c^{(0)} = \frac{\kappa}{n+\kappa} + \left( 1 - \alpha^2 + \beta \right)$$
$$W_m^{(i)} = W_c^{(i)} = \frac{1}{2(n+\kappa)}, \quad i = 1, \ldots, 2n$$
where $\alpha$ controls the spread of the sigma points and $\beta$ incorporates prior knowledge of the distribution (typically $\beta = 2$ for Gaussian distributions).
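The sigma point construction and weights above translate directly into code. The following is a minimal NumPy sketch (our illustration, not the paper's implementation), using a Cholesky factor as the matrix square root and defaulting to $\kappa = 3 - n$:

```python
import numpy as np

def sigma_points(z_hat, P, kappa=None, alpha=1.0, beta=2.0):
    """Generate the 2n+1 sigma points and weights of the unscented
    transform. kappa defaults to 3 - n (standard Gaussian choice);
    the Cholesky factor serves as the matrix square root."""
    n = z_hat.size
    if kappa is None:
        kappa = 3.0 - n
    S = np.linalg.cholesky((n + kappa) * P)   # S @ S.T = (n + kappa) * P
    chi = np.empty((2 * n + 1, n))
    chi[0] = z_hat                            # center point chi^(0)
    for i in range(n):
        chi[i + 1] = z_hat + S[:, i]          # chi^(i)   = z + sqrt-column i
        chi[n + i + 1] = z_hat - S[:, i]      # chi^(i+n) = z - sqrt-column i
    Wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    Wm[0] = kappa / (n + kappa)
    Wc = Wm.copy()
    Wc[0] += 1.0 - alpha**2 + beta            # covariance weight correction
    return chi, Wm, Wc
```

By construction, the weighted mean of the sigma points recovers $\hat{\mathbf{z}}$ and their weighted scatter recovers $\mathbf{P}$, which is what the transform relies on.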

2.6.2. Prediction Step

Each sigma point is propagated through the nonlinear state transition function:
$$\boldsymbol{\chi}_{t|t-1}^{(i)} = f\!\left( \boldsymbol{\chi}_{t-1|t-1}^{(i)}, \mathbf{u}_{t-1} \right), \quad i = 0, \ldots, 2n$$
The predicted mean and covariance are computed from the transformed sigma points:
$$\hat{\mathbf{z}}_{t|t-1} = \sum_{i=0}^{2n} W_m^{(i)} \boldsymbol{\chi}_{t|t-1}^{(i)}$$
$$\mathbf{P}_{t|t-1} = \sum_{i=0}^{2n} W_c^{(i)} \left( \boldsymbol{\chi}_{t|t-1}^{(i)} - \hat{\mathbf{z}}_{t|t-1} \right) \left( \boldsymbol{\chi}_{t|t-1}^{(i)} - \hat{\mathbf{z}}_{t|t-1} \right)^T + \mathbf{Q}$$

2.6.3. Update Step

The sigma points are transformed through the observation function:
$$\boldsymbol{\gamma}_t^{(i)} = h\!\left( \boldsymbol{\chi}_{t|t-1}^{(i)} \right), \quad i = 0, \ldots, 2n$$
The predicted observation mean, innovation covariance, and cross-covariance are
$$\hat{\mathbf{y}}_{t|t-1} = \sum_{i=0}^{2n} W_m^{(i)} \boldsymbol{\gamma}_t^{(i)}$$
$$\mathbf{P}_{yy} = \sum_{i=0}^{2n} W_c^{(i)} \left( \boldsymbol{\gamma}_t^{(i)} - \hat{\mathbf{y}}_{t|t-1} \right) \left( \boldsymbol{\gamma}_t^{(i)} - \hat{\mathbf{y}}_{t|t-1} \right)^T + \mathbf{R}$$
$$\mathbf{P}_{zy} = \sum_{i=0}^{2n} W_c^{(i)} \left( \boldsymbol{\chi}_{t|t-1}^{(i)} - \hat{\mathbf{z}}_{t|t-1} \right) \left( \boldsymbol{\gamma}_t^{(i)} - \hat{\mathbf{y}}_{t|t-1} \right)^T$$
The Kalman gain and state update are
$$\mathbf{K}_t = \mathbf{P}_{zy} \mathbf{P}_{yy}^{-1}$$
$$\hat{\mathbf{z}}_{t|t} = \hat{\mathbf{z}}_{t|t-1} + \mathbf{K}_t \left( \mathbf{y}_t - \hat{\mathbf{y}}_{t|t-1} \right)$$
$$\mathbf{P}_{t|t} = \mathbf{P}_{t|t-1} - \mathbf{K}_t \mathbf{P}_{yy} \mathbf{K}_t^T$$
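Putting the prediction and update equations together, one UKF cycle can be sketched as follows. This is a simplified NumPy illustration of the technique (not the paper's code): it reuses the propagated sigma points for the update, as the equations above do, and uses equal mean/covariance weights of the basic unscented transform; `f` and `h` are user-supplied transition and observation functions.

```python
import numpy as np

def ukf_step(z_hat, P, y, f, h, Q, R, kappa=None):
    """One predict-update cycle of a basic UKF (illustrative sketch)."""
    n = z_hat.size
    if kappa is None:
        kappa = 3.0 - n
    # --- sigma points of the posterior at t-1
    S = np.linalg.cholesky((n + kappa) * P)      # S @ S.T = (n + kappa) P
    chi = np.vstack([z_hat, z_hat + S.T, z_hat - S.T])
    Wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    Wm[0] = kappa / (n + kappa)
    Wc = Wm.copy()                               # basic transform: Wc == Wm
    # --- prediction: propagate sigma points through f
    chi_p = np.array([f(c) for c in chi])
    z_pred = Wm @ chi_p
    P_pred = Q + sum(w * np.outer(c - z_pred, c - z_pred)
                     for w, c in zip(Wc, chi_p))
    # --- update: transform through h, form innovation statistics
    gam = np.array([h(c) for c in chi_p])
    y_pred = Wm @ gam
    Pyy = R + sum(w * np.outer(g - y_pred, g - y_pred)
                  for w, g in zip(Wc, gam))
    Pzy = sum(w * np.outer(c - z_pred, g - y_pred)
              for w, c, g in zip(Wc, chi_p, gam))
    K = Pzy @ np.linalg.inv(Pyy)                 # Kalman gain
    z_new = z_pred + K @ (y - y_pred)
    P_new = P_pred - K @ Pyy @ K.T
    return z_new, P_new
```

For linear `f` and `h`, the sigma point statistics are exact and the step reduces to the standard linear Kalman update.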

2.6.4. Unscented Kalman Filter Parameter Configuration

The UKF uses the same state vector and exponential degradation model as EKF but employs sigma point sampling instead of linearization.
Sigma Point Parameters:
$$\alpha = 0.5 \ \text{(spread of sigma points)}, \qquad \beta = 2 \ \text{(optimal for Gaussian distributions)}, \qquad \kappa = 0 \ \text{(secondary scaling parameter)}$$
The composite scaling parameter is
$$\lambda_{\text{UKF}} = \alpha^2 (n + \kappa) - n = 0.5^2 \times (3 + 0) - 3 = -2.25$$
where $n = 3$ is the state dimension.
Sigma Point Weights:
$$W_m^{(0)} = \frac{\lambda_{\text{UKF}}}{n + \lambda_{\text{UKF}}} = \frac{-2.25}{0.75} = -3.0$$
$$W_c^{(0)} = W_m^{(0)} + \left( 1 - \alpha^2 + \beta \right) = -3.0 + 2.75 = -0.25$$
$$W_m^{(i)} = W_c^{(i)} = \frac{1}{2(n + \lambda_{\text{UKF}})} = \frac{1}{1.5} \approx 0.667, \quad i = 1, \ldots, 2n$$
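These parameter values can be checked with a few lines of arithmetic (a verification script, reproducing the numbers reported above):

```python
# Scaled unscented transform parameters used in the paper
alpha, beta, kappa, n = 0.5, 2.0, 0.0, 3

lam = alpha**2 * (n + kappa) - n        # composite scaling parameter
Wm0 = lam / (n + lam)                   # mean weight of the center point
Wc0 = Wm0 + (1.0 - alpha**2 + beta)     # covariance weight of the center point
Wi = 1.0 / (2.0 * (n + lam))            # weights of the 2n outer points

print(lam, Wm0, Wc0, round(Wi, 3))      # -2.25 -3.0 -0.25 0.667
```

Note that the mean weights still sum to one: $W_m^{(0)} + 2n \cdot W_m^{(i)} = -3.0 + 6 \times 0.667 = 1$, as required by the transform.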
The negative mean weight $W_m^{(0)} = -3.0$ arises from the chosen scaling parameters ($\alpha = 0.5$, $\kappa = 0$) and is mathematically valid within the unscented transform framework. While unusual, negative weights do not cause numerical instability when the sigma point spread is appropriately constrained. The key requirement is that the weighted combination of sigma points accurately captures the mean and covariance of the transformed distribution. To mitigate potential numerical issues:
  • The Joseph form covariance update (Equation (74)) is employed, which guarantees positive semi-definiteness regardless of weight signs.
  • A minimum eigenvalue threshold $\epsilon_{\min} = 10^{-8}$ enforces numerical stability.
  • State constraints (Equations (45) and (46)) prevent physically impossible values that could arise from extrapolation.
Process Noise Covariance:
$$\mathbf{Q}_{\text{UKF}} = \text{diag}(q_{\text{HI}}, q_\lambda, q_{z_2}) = \text{diag}(0.01, 0.1, 0.01)$$
Note that the UKF process noise values differ from those of the EKF to account for the sigma point propagation characteristics.
Numerical Stability: To ensure positive definiteness of the covariance matrix, a minimum eigenvalue threshold $\epsilon_{\min} = 10^{-8}$ is enforced. The Joseph form update is used:
$$\mathbf{P}_{t|t} = \left( \mathbf{I} - \mathbf{K}_t \mathbf{H} \right) \mathbf{P}_{t|t-1} \left( \mathbf{I} - \mathbf{K}_t \mathbf{H} \right)^T + \mathbf{K}_t \mathbf{R} \mathbf{K}_t^T$$
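The Joseph-form update with the eigenvalue floor can be sketched as follows (a minimal NumPy illustration; the function name `joseph_update` is ours):

```python
import numpy as np

def joseph_update(P_pred, K, H, R, eps_min=1e-8):
    """Joseph-form covariance update with a minimum eigenvalue floor,
    as used for numerical stability (illustrative sketch)."""
    n = P_pred.shape[0]
    A = np.eye(n) - K @ H
    # positive semi-definite by construction, regardless of weight signs
    P = A @ P_pred @ A.T + K @ R @ K.T
    # enforce symmetry, then clamp eigenvalues from below at eps_min
    P = 0.5 * (P + P.T)
    w, V = np.linalg.eigh(P)
    w = np.maximum(w, eps_min)
    return V @ np.diag(w) @ V.T
```

The eigendecomposition step guarantees that every eigenvalue of the returned covariance is at least $\epsilon_{\min}$, so subsequent Cholesky factorizations for sigma point generation cannot fail.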

2.6.5. Advantages of Kalman Filter Variants for RUL Prediction

Each Kalman filter variant offers distinct advantages for degradation modeling in the LDA-projected space.
The LKF provides optimal state estimation for linear systems with Gaussian noise, offering theoretical guarantees on estimation accuracy. It is computationally efficient with $O(n^3)$ complexity, making it suitable for real-time implementation on embedded systems. The LKF requires minimal parameter tuning, with only the process and measurement noise covariances to be specified, and its constant-velocity model effectively captures gradual degradation trends in LDA space.
The EKF accommodates nonlinear degradation dynamics through first-order Taylor-series linearization. The exponential degradation model naturally captures accelerating wear behavior commonly observed in turbofan engines. A key advantage of EKF is its ability to jointly estimate the health index and degradation rate parameter λ , providing insight into critical failure progression speed while maintaining computational efficiency comparable to LKF.
The UKF eliminates the need for Jacobian computation, simplifying implementation for complex degradation models. The unscented transform captures mean and covariance accurately to at least second order for any nonlinearity, compared to first-order accuracy of EKF. Additionally, the sigma point approach naturally handles asymmetric degradation distributions that may arise near critical boundaries, providing more robust estimation when degradation dynamics deviate significantly from linearity, particularly during rapid state transitions.
The selection among these variants depends on the trade-off between computational complexity and modeling fidelity required for the specific application.

2.6.6. Parameter Selection Methodology

The process noise covariance matrices for all Kalman filter variants were determined through grid search optimization on the training dataset, minimizing RMSE as the objective function. For LKF, the search covered $q_{pos} \in \{0.001, 0.005, 0.01, 0.05, 0.1\}$ and $q_{vel} \in \{10^{-4}, 5 \times 10^{-4}, 10^{-3}, 5 \times 10^{-3}, 10^{-2}\}$, yielding 25 combinations. For EKF, a two-phase approach tuned $q_{HI} \in \{10^{-4}, 5 \times 10^{-4}, 10^{-3}, 5 \times 10^{-3}, 10^{-2}\}$ and $q_\lambda \in \{10^{-6}, 5 \times 10^{-6}, 10^{-5}, 5 \times 10^{-5}, 10^{-4}\}$ first, followed by $q_{z_2} \in \{0.001, 0.005, 0.01, 0.05\}$. UKF employed identical $\mathbf{Q}$ parameter ranges with sigma point parameters fixed at $\alpha = 0.5$, $\beta = 2$, $\kappa = 0$ following standard guidelines [40]. The measurement noise covariance $\mathbf{R}$ was estimated adaptively from the initial 20 observations of each trajectory (Equation (29)), eliminating manual tuning. Sensitivity analysis showed that varying $q_{pos}$ by factors of $0.1\times$ to $10\times$ from optimal values resulted in RMSE changes within $\pm 5\%$, indicating moderate robustness to parameter selection.
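The grid search itself is straightforward. A sketch for the LKF case follows (our illustration; `run_lkf` stands in for evaluating the filter on the training set and returning RMSE, and is not from the paper):

```python
import itertools
import numpy as np

# LKF process-noise grids as reported in the text
q_pos_grid = [0.001, 0.005, 0.01, 0.05, 0.1]
q_vel_grid = [1e-4, 5e-4, 1e-3, 5e-3, 1e-2]

def tune(run_lkf):
    """Exhaustive search over the 25 (q_pos, q_vel) combinations,
    minimizing training-set RMSE (hypothetical driver)."""
    best = (np.inf, None)
    for q_pos, q_vel in itertools.product(q_pos_grid, q_vel_grid):
        rmse = run_lkf(np.diag([q_pos, q_vel]))  # Q for [position, velocity]
        if rmse < best[0]:
            best = (rmse, (q_pos, q_vel))
    return best
```

The EKF's two-phase search follows the same pattern, first over $(q_{HI}, q_\lambda)$ and then over $q_{z_2}$ with the first phase fixed.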

3. Results

This section presents the experimental evaluation of the proposed LDA–Kalman framework on the NASA C-MAPSS dataset, comparing four estimation methods: Autoregressive (AR) baseline, Linear Kalman Filter (LKF), Extended Kalman Filter (EKF), and Unscented Kalman Filter (UKF).

3.1. Experimental Setup

The experiments were conducted using all four C-MAPSS sub-datasets (FD001–FD004) combined into a unified framework. The combined dataset comprises 709 training units with 160,359 total samples and 707 test engines. For normalization purposes, the maximum RUL across the dataset was determined to be 554 cycles.

3.1.1. Health Index Classification Thresholds

The health index-based classification partitions the measurement space into three operational regions based on normalized RUL values:
  • Critical (Class 1): RUL < 55.4 cycles (normalized RUL < 0.1).
  • Warning (Class 2): 55.4 ≤ RUL < 166.2 cycles (0.1 ≤ normalized RUL < 0.3).
  • Healthy (Class 3): RUL ≥ 166.2 cycles (normalized RUL ≥ 0.3).
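The three-region assignment amounts to two threshold comparisons on the normalized RUL (a minimal sketch; the function name is ours):

```python
def classify_health_state(rul_cycles, max_rul=554.0):
    """Three-region health-state assignment with normalized RUL
    thresholds 0.1 and 0.3 (55.4 and 166.2 cycles for max_rul = 554)."""
    hi = rul_cycles / max_rul      # normalized RUL (health index)
    if hi < 0.1:
        return "Critical"
    elif hi < 0.3:
        return "Warning"
    return "Healthy"
```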
Table 1 presents the training and test data distributions across the three health-state classes.

3.1.2. LDA Projection Parameters

The Linear Discriminant Analysis projection yielded two discriminant components with eigenvalues $\lambda_1 = 2.8636$ and $\lambda_2 = 0.1149$, indicating that the first component captures significantly more between-class variance. The LDA coordinate ranges were as follows:
  • LDA Component 1: [2277.12, 2288.31].
  • LDA Component 2: [−2049.43, −2034.24].
The normalized RUL regression coefficients in LDA space were determined as $\mathbf{b} = [0.0694, 0.0212, 201.45]^T$.

3.1.3. LDA Space Visualization

Figure 1 presents the two-dimensional LDA projection of all 707 test engines at their final measurement cycle. The visualization displays the spatial distribution of health states (Critical, Warning, Healthy) along with the learned decision boundaries for each filtering method.
The eigenvalue ratio ($\lambda_1 = 2.8636$, $\lambda_2 = 0.1149$) indicates that the first discriminant component captures 96.1% of the between-class variance, explaining the horizontal arrangement of class centroids along LDA Component 1. The 95% confidence ellipses reveal substantial overlap between adjacent classes, reflecting the continuous nature of the degradation process. Despite this inherent overlap, the UKF achieves the highest classification accuracy (73.3%) by providing robust state estimates that reduce misclassification near decision boundaries. The near-parallel orientation of the two boundaries confirms that degradation trajectories follow approximately linear paths through the discriminant space.

3.1.4. Analysis of Sensor Contributions

A sensitivity analysis was conducted to identify the most influential sensors contributing to the health index (HI) estimation. As illustrated in Figure 3 (left), the analysis reveals that five sensors dominate the HI calculation—S15 (32%), S18 (18%), S4 (17%), S5 (10%), and S9 (7%)—collectively accounting for 84% of the total contribution, while the remaining sensors contribute 16%. This finding suggests that monitoring these critical sensors could provide efficient degradation assessment without requiring the full sensor suite. The LDA component analysis (Figure 3, right) demonstrates that LDA1 is the dominant component, explaining approximately 60% of the HI variance and exhibiting a strong correlation of 78% with the normalized RUL. In contrast, LDA2 contributes minimally to variance (<5%) but maintains a moderate correlation of 15% with RUL, indicating its supplementary role in capturing secondary degradation patterns.

3.2. Performance Metrics

The methods are evaluated using five performance metrics:
  • Root Mean Square Error (RMSE): Measures the average magnitude of prediction errors in cycles:
    $$\text{RMSE} = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} \left( \widehat{RUL}_i - RUL_i^{\,true} \right)^2 }$$
  • Normalized RMSE (RMSEn): RMSE normalized by the maximum RUL (554 cycles).
  • Mean Absolute Error (MAE): Provides a linear measure of average prediction error magnitude:
    $$\text{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| \widehat{RUL}_i - RUL_i^{\,true} \right|$$
  • Score: The PHM08 asymmetric scoring function that penalizes late predictions more heavily than early predictions:
    $$S = \sum_{i=1}^{n} s_i, \qquad s_i = \begin{cases} e^{-d_i/13} - 1, & d_i < 0 \\ e^{d_i/10} - 1, & d_i \geq 0 \end{cases}$$
    where $d_i = \widehat{RUL}_i - RUL_i^{\,true}$ is the prediction error.
  • Classification Accuracy: The percentage of correctly classified engine health states among the three regions.
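The RMSE, MAE, and PHM08 Score defined above can be computed as follows (a minimal NumPy sketch; the function name `evaluate` is ours):

```python
import numpy as np

def evaluate(rul_pred, rul_true):
    """RMSE, MAE, and the PHM08 asymmetric score. With d = pred - true,
    late predictions (d >= 0) use the smaller time constant (10 vs. 13)
    and are therefore penalized more heavily."""
    d = np.asarray(rul_pred, float) - np.asarray(rul_true, float)
    rmse = float(np.sqrt(np.mean(d ** 2)))
    mae = float(np.mean(np.abs(d)))
    s = np.where(d < 0, np.exp(-d / 13.0) - 1.0, np.exp(d / 10.0) - 1.0)
    return rmse, mae, float(s.sum())
```

For example, a 10-cycle late error contributes $e^{1} - 1 \approx 1.72$ to the Score, whereas a 10-cycle early error contributes only $e^{10/13} - 1 \approx 1.16$.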

3.3. Quantitative Comparison of Proposed Methods

Table 2 summarizes the performance of all four methods on the combined C-MAPSS dataset.
The Extended Kalman Filter achieved the lowest RMSE (36.43 cycles), while the Unscented Kalman Filter achieved the best performance in both the Score metric (552,572) and classification accuracy (73.27%). The UKF’s substantially lower Score value indicates superior performance in avoiding late predictions, which is critical for safety-oriented maintenance scheduling.

Improvement Analysis

Table 3 presents the relative improvement of Kalman filter methods over the AR baseline.
The results demonstrate that while RMSE values remain comparable across all methods, the nonlinear Kalman filter variants (EKF and UKF) achieve substantial improvements in the Score metric. The UKF provides a 54.9% reduction in Score compared to the AR baseline, indicating significantly fewer late predictions that could lead to unexpected failures. The EKF also demonstrates substantial improvement with 36.1% Score reduction while achieving the lowest absolute RMSE.

3.4. Training Data Analysis

Figure 4 presents the training data distribution in LDA space, demonstrating the effectiveness of LDA-based dimensionality reduction for health-state separation.
The LDA projection successfully separates the three health states into distinct regions, with clear boundaries visible between Critical, Warning, and Healthy classes. The normalized RUL color gradient confirms that degradation progresses systematically from the upper-right (healthy) to lower-left (critical) regions of the LDA space.

3.5. Error Distribution Analysis

Figure 5 presents the distribution of prediction errors across all methods using box plots. The error statistics reveal distinct bias characteristics for each method.
Table 4 summarizes the error distribution statistics for each method.
The near-zero mean error of EKF (1.7 cycles) indicates that this method provides essentially unbiased RUL estimates. The negative bias of UKF (−1.2 cycles) reflects a conservative estimation strategy that predicts slightly earlier maintenance needs, which may be preferable in safety-critical aviation applications where the consequences of late predictions outweigh those of early maintenance interventions.
Figure 6 presents the error histograms for each method, providing a detailed view of the error distribution shapes.

3.6. Classification Performance

Figure 7 presents the confusion matrices for the three-class classification task across all methods.
Key observations from the classification results:
  • All methods achieve the highest accuracy in classifying Warning states, which constitute the majority of test samples (58.6%).
  • The Healthy class presents the greatest classification challenge due to its small sample size (6.1% of test data).
  • UKF demonstrates improved Critical state detection (175 correct out of 250) compared to AR (147 correct), which is crucial for identifying engines requiring immediate maintenance attention.
  • The Kalman filter methods show progressive improvement in Critical state classification accuracy, with UKF achieving the best performance.

3.7. Prediction Accuracy Visualization

Figure 8 shows scatter plots of predicted versus true normalized RUL values for all methods. Points closer to the diagonal line indicate more accurate predictions.
The scatter plots reveal that all methods exhibit similar patterns of prediction accuracy, with tighter clustering around the ideal prediction line for mid-range normalized RUL values (0.1–0.4). The three-zone classification boundaries demonstrate the alignment between continuous RUL prediction and discrete health-state classification.

3.8. Trajectory Filtering Performance

The selected engine represents a typical degradation scenario with 184 operational cycles spanning all three health states. In the LDA-projected space (Figure 9a), the raw measurements exhibit considerable scatter due to sensor noise and operational variations, while the filtered trajectories clearly reveal the systematic progression toward the critical failure region. The temporal analysis (Figure 9b) confirms that all filtering methods successfully extract the underlying degradation signal, with the primary LDA component decreasing approximately 4 units over the engine’s lifetime. The Kalman filter variants provide visibly smoother trajectories compared to raw measurements, demonstrating effective noise suppression.
Health-state classification (Figure 10) demonstrates performance across all methods, correctly identifying the Healthy→Warning transition around cycle 30 and the Warning→Critical transition near cycle 160, with only minor misclassifications during the initial healthy phase.
RUL predictions (Figure 11) from all methods closely follow the true degradation curve, with convergence improving as the engine approaches failure. The Critical and Warning thresholds (0.1 and 0.3 normalized RUL) provide clear decision points for maintenance planning.
The trajectory analysis demonstrates the following:
  • Raw sensor measurements exhibit substantial noise in LDA space, while filtered trajectories from LKF, EKF, and UKF provide smoothed degradation estimates.
  • The health index curves from EKF and UKF show monotonic decline from approximately 1.0 (healthy) to near 0 (critical failure).
  • Normalized RUL predictions track the true degradation curve closely, with all methods converging near the actual critical point.
  • The degradation rate parameter ( λ ) estimated by EKF and UKF shows adaptive behavior, with increasing rates as the engine approaches critical region.

3.9. Ablation Experiments

To quantify the contribution of LDA dimensionality reduction to the overall framework performance, we conducted ablation experiments comparing Kalman filter variants operating directly in the original 21-dimensional sensor space (Without LDA) versus the 2-dimensional LDA-projected space. Table 5 presents the results across all four C-MAPSS sub-datasets.
The ablation results reveal distinct patterns depending on the dataset complexity:
FD001 (Single Condition, Single Fault): Without LDA, RMSE is 31.22–31.87 cycles with 60–63% accuracy. LDA improves RMSE by 14–15% (to 26.83–26.99 cycles), accuracy by 5–8 points, and Score by 40%.
FD002 (Six Conditions, Single Fault): Without LDA, RMSE exceeds 7400 cycles with infinite Score and 20% accuracy—effectively unusable predictions. LDA reduces RMSE to 31.84–32.91 cycles (>99.5% improvement) and increases accuracy to 72–74%. The six operating conditions create variance that obscures degradation signals; LDA filters this operational variability.
FD003 (Single Condition, Two Faults): Without LDA, RMSE is 56.31–56.96 cycles with Score $\sim 10^8$. LDA reduces RMSE by 27–28% (to 40.75–41.24 cycles), Score by 99% (to $\sim 10^6$), and improves accuracy from 57% to 62–63%.
FD004 (Six Conditions, Two Faults): The most complex dataset shows the most dramatic results. Without LDA, RMSE exceeds 13,000 cycles with infinite Score and 23% accuracy. LDA reduces RMSE to 39.21–40.03 cycles (>99.7% improvement) and increases accuracy to 68–69%.
The ablation study demonstrates that LDA dimensionality reduction is not merely an optional preprocessing step but a fundamental component of the framework. For single-condition datasets (FD001, FD003), LDA provides moderate but consistent improvements of 14–28% in RMSE. For multi-condition datasets (FD002, FD004), LDA is absolutely essential—without it, the Kalman filtering approach fails catastrophically. This occurs because operational variations dominate the variance structure of the original sensor space, preventing the filters from tracking degradation trajectories. LDA’s supervised projection specifically maximizes health-state separability, creating a reduced-dimensional space where degradation dynamics can be effectively modeled regardless of operating conditions.

3.10. Computational Cost Analysis

To evaluate the computational efficiency of the proposed framework, comprehensive timing measurements were conducted for both training and inference phases. All experiments were performed on a desktop workstation equipped with an Intel Core i7-12700KF processor (3.60 GHz, 12 cores, 25 MB cache) running MATLAB R2024B.
The LDA-based feature extraction processes 160,359 training samples in 1.3617 s (8.49 μs per sample), enabling rapid model development and periodic recalibration. Table 6 presents the inference-phase computational costs for the test dataset comprising 707 engines and 104,897 operational cycles.
The results reveal significant computational advantages of the Kalman filter variants operating in the LDA-projected space. The Extended Kalman Filter (EKF) achieves the fastest inference time at 1.70 μs per cycle (78% of LKF complexity), despite incorporating nonlinear dynamics through Jacobian computations. The low dimensionality of the LDA space (2D) ensures minimal matrix operation overhead. The Unscented Kalman Filter (UKF) requires 10.26 μs per cycle (4.69× LKF), with the increased cost stemming from sigma point propagation (5 sigma points for the 2D state space). The Autoregressive (AR) model exhibits unexpectedly high computational cost at 345.6 μs per cycle (157.9× LKF) due to dynamic matrix reconstruction at each time step.
All Kalman filter variants demonstrate strict real-time capability. At a typical sensor sampling rate of 1 Hz (a budget of 1,000,000 μs per cycle), even the UKF occupies only about 0.001% of the computational budget. For Engine 20 (184 cycles), EKF completes trajectory analysis in 0.298 ms, enabling deployment on resource-constrained embedded systems.

3.11. Comparison with State-of-the-Art Methods

Table 7 reveals distinct performance tiers across method categories. Deep learning methods, particularly Transformer-based architectures (TTSNet, EGG-Transformer, TCRSCANet), achieve the lowest RMSE values (11–18 cycles on individual sub-datasets), benefiting from end-to-end feature learning and extensive hyperparameter optimization on each sub-dataset separately. Traditional machine learning methods (SVR, RVR) show moderate performance (20–45 cycles), while the proposed LDA–Kalman framework yields 26–41 cycles across sub-datasets. Notably, our combined dataset evaluation (36.43–36.69 cycles) represents a more challenging scenario than individual sub-dataset results, as the model must generalize across different operational conditions (FD002, FD004) and fault modes without dataset-specific tuning. Compared to the only other Kalman-based method (UKF+LR), the proposed approach shows higher RMSE on individual sub-datasets (FD002: 31.84 vs. 21.50, FD003: 40.75 vs. 22.40); however, UKF+LR was optimized separately for each sub-dataset, whereas our unified framework uses identical parameters across all four sub-datasets while additionally providing health-state classification capability.
The comparison reveals that while deep learning methods (particularly Transformer-based architectures) achieve lower RMSE values on individual sub-datasets, the proposed LDA–Kalman framework offers several practical advantages:
  • Interpretability: The linear nature of LDA projection and Kalman filtering provides transparent insight into how sensor measurements influence predictions through the learned hyperplane coefficients.
  • Computational Efficiency: The framework requires no GPU acceleration and can process predictions in real time on embedded systems, with $O(n^3)$ complexity for Kalman filtering, where $n$ is the state dimension (3 for this implementation).
  • Unified Classification–Regression: Unlike most deep learning approaches that focus solely on RUL regression, the proposed method simultaneously provides health-state classification with accuracy up to 73.27%.
  • Uncertainty Quantification: Kalman filter covariance matrices provide natural uncertainty bounds on predictions, enabling risk-informed maintenance decision-making.
  • Robustness: The framework demonstrates consistent performance across combined datasets with varying operational conditions and fault modes, without requiring dataset-specific hyperparameter tuning.
  • Reduced Data Requirements: Unlike deep learning methods that require large training datasets, the proposed framework can operate effectively with limited failure instances.

3.12. Summary of Findings

The experimental results demonstrate several key findings regarding the performance of Kalman filter variants in the LDA-projected discriminant space. In terms of the asymmetric Score metric, the nonlinear Kalman filter variants (EKF and UKF) improve upon the AR baseline, with improvements ranging from 36.1% (EKF) to 54.9% (UKF), indicating significantly fewer dangerous late predictions that could lead to unexpected failures. Regarding point prediction accuracy, the EKF achieves the best performance with the lowest RMSE (36.43 cycles) and near-zero mean bias (1.7 cycles), suggesting that first-order linearization adequately captures the degradation dynamics in LDA space. The UKF provides the most conservative predictions with the lowest Score (552,572) and highest classification accuracy (73.27%), making it particularly suitable for safety-critical applications where late predictions must be minimized. Furthermore, classification accuracy remains consistent across all methods (70.72–73.27%), indicating that the LDA-based boundary classification is robust to the choice of filtering method and that the learned hyperplane boundaries effectively partition the discriminant space regardless of the state estimation approach employed.

4. Conclusions

This paper presented a novel framework that integrates Linear Discriminant Analysis with Kalman filtering for turbofan engine remaining useful life prediction. The key innovation lies in performing state-space estimation directly within the LDA-projected discriminant subspace, where degradation trajectories are inherently aligned with directions of maximum class separability. This approach fundamentally differs from existing methods that either apply Kalman filtering in the original high-dimensional sensor space or use LDA solely for classification purposes. By unifying dimensionality reduction, state estimation, and boundary-based classification within a single coherent framework, the proposed methodology enables optimal noise filtering in a geometrically meaningful space where distances to critical failure boundaries directly correspond to remaining useful life.
Building upon previous work on linear classification–regression methods, this study specifically addressed the sensitivity of Autoregressive models to measurement noise during transitional degradation phases. The experimental evaluation on the NASA C-MAPSS benchmark dataset demonstrated that Kalman filter variants achieve substantial reductions in the asymmetric PHM08 Score (36–55%) while maintaining RMSE comparable to the AR baseline, with the Unscented Kalman Filter attaining a 54.9% reduction in Score (from 1,224,299 to 552,572) and improving classification accuracy from 70.72% to 73.27%. These results indicate that the primary benefit lies in a conservative prediction bias (fewer late predictions) rather than improved point accuracy. Deep learning methods achieve 2–3× lower RMSE on individual sub-datasets, positioning the proposed framework as a complementary option for scenarios prioritizing interpretability, computational efficiency, and conservative predictions over raw accuracy. The analysis also revealed distinct bias characteristics: the Extended Kalman Filter provided nearly unbiased predictions with a mean error of only 1.7 cycles, while the UKF exhibited a conservative negative bias suitable for safety-critical applications where late predictions pose greater risks.
The proposed LDA–Kalman framework offers several practical advantages for industrial deployment:
  • The linear LDA projection, combined with Kalman filter covariance matrices, provides full interpretability and natural uncertainty quantification for risk-informed maintenance decisions.
  • The unified treatment of classification and regression ensures consistency between discrete health-state assessment and continuous RUL prediction within a single architecture.
  • The framework operates effectively with limited failure instances and maintains consistent performance across combined datasets with varying operational conditions and fault modes, without requiring dataset-specific hyperparameter tuning.
Future research directions include the development of adaptive noise covariance estimation techniques capable of handling non-stationary degradation dynamics, the incorporation of physics-based degradation knowledge into state transition equations to enhance extrapolation capabilities, and comprehensive validation using real-world operational flight data.

Limitations and Practical Deployment Considerations

While the proposed framework demonstrates promising results on the C-MAPSS benchmark, several limitations should be considered for real-world deployment.
  • Data Requirements: The framework requires complete run-to-failure trajectories for training. In operational contexts, preventive maintenance typically intervenes before failure, resulting in right-censored data that would require modified training procedures or transfer learning approaches.
  • Stationarity Assumptions: The current implementation assumes consistent degradation dynamics across operating conditions. Real engines experience variable flight profiles, environmental variations, and maintenance interventions that may shift the learned LDA boundaries over time.
  • Sensor Challenges: Practical deployment introduces sensor drift, missing data from communication dropouts, and irregular sampling rates—factors not present in the C-MAPSS simulation that may degrade prediction accuracy.
  • Fleet Variability: Manufacturing tolerances and differing operational histories create unit-to-unit variability that may exceed the within-class scatter captured during training, potentially increasing false alarm rates or missed detections.
  • Accuracy Trade-offs: As shown in Table 7, deep learning methods achieve 2–3× lower RMSE on individual sub-datasets. The proposed framework is most appropriate for applications prioritizing interpretability, conservative predictions, and embedded deployment rather than minimizing point prediction error.
  • Interpretability Scope: The interpretability claims of this framework refer to geometric interpretability in the two-dimensional LDA space, where degradation trajectories can be directly visualized and decision boundaries are explicitly defined, rather than causal explanation of physical component failures. Spatial fault localization in LDA space enables systematic analysis of prediction patterns, facilitating algorithm debugging and refinement in a way that is not possible with high-dimensional deep learning embeddings.

Author Contributions

Conceptualization, U.Y. and H.A.; methodology, U.Y. and H.A.; software, U.Y.; validation, H.A. and U.Y.; formal analysis, U.Y.; investigation, U.Y. and H.A.; resources, U.Y.; data curation, U.Y.; writing—original draft preparation, U.Y. and H.A.; writing—review and editing, U.Y. and H.A.; visualization, U.Y.; supervision, H.A.; project administration, U.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are openly available in the CMAPSS Jet Engine Simulated Data repository at https://data.nasa.gov/dataset/cmapss-jet-engine-simulated-data (accessed on 10 February 2026).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AR: Autoregressive
BILSTM-AE: Bidirectional Long Short-Term Memory Autoencoder
C-MAPSS: Commercial Modular Aero-Propulsion System Simulation
CNN: Convolutional Neural Network
CNN-BGRU-SA: CNN with Bidirectional Gated Recurrent Unit and Self-Attention
DL: Deep Learning
EKF: Extended Kalman Filter
HI: Health Index
IATA: International Air Transport Association
KF: Kalman Filter
LDA: Linear Discriminant Analysis
LKF: Linear Kalman Filter
LSTM: Long Short-Term Memory
MAE: Mean Absolute Error
MLP: Multilayer Perceptron
NASA: National Aeronautics and Space Administration
PdM: Predictive Maintenance
PHM: Prognostics and Health Management
RMSE: Root Mean Square Error
RMSEn: Normalized Root Mean Square Error
RUL: Remaining Useful Life
RVR: Relevance Vector Regression
SVR: Support Vector Regression
TCNN: Temporal Convolutional Neural Network
TCRSCANet: Temporal Convolutional Residual Self-Calibrated Attention Network
TTSNet: Transformer Time-Series Network
UKF: Unscented Kalman Filter

References

  1. International Air Transport Association (IATA). Global Air Passenger Demand Reaches Record High in 2024; IATA Press Release: Geneva, Switzerland, 2025. Available online: https://www.iata.org/en/pressroom/2025-releases/2025-01-30-01/ (accessed on 3 January 2026).
  2. Saxena, A.; Goebel, K. PHM08 Challenge Data Set. In NASA Ames Prognostics Data Repository; NASA Ames Research Center: Moffett Field, CA, USA, 2008. Available online: http://ti.arc.nasa.gov/project/prognostic-data-repository (accessed on 15 January 2026).
  3. Chao, M.A.; Kulkarni, C.; Goebel, K.; Fink, O. Aircraft Engine Run-to-Failure Dataset under Real Flight Conditions for Prognostics and Diagnostics. Data 2021, 6, 5.
  4. Mobley, R.K. An Introduction to Predictive Maintenance, 2nd ed.; Butterworth-Heinemann: Oxford, UK, 2002.
  5. Ran, Y.; Zhou, X.; Lin, P.; Wen, Y.; Deng, R. A Survey of Predictive Maintenance: Systems, Purposes and Approaches. arXiv 2019, arXiv:1912.07383.
  6. Zhang, W.; Yang, D.; Wang, H. Data-Driven Methods for Predictive Maintenance of Industrial Equipment: A Survey. IEEE Syst. J. 2019, 13, 2213–2227.
  7. Carvalho, T.P.; Soares, F.A.; Vita, R.; Francisco, R.D.P.; Basto, J.P.; Alcalá, S.G. A Systematic Literature Review of Machine Learning Methods Applied to Predictive Maintenance. Comput. Ind. Eng. 2019, 137, 106024.
  8. Compare, M.; Baraldi, P.; Zio, E. Challenges to IoT-Enabled Predictive Maintenance for Industry 4.0. IEEE Internet Things J. 2020, 7, 4585–4597.
  9. Lei, Y.; Li, N.; Guo, L.; Li, N.; Yan, T.; Lin, J. Machinery Health Prognostics: A Systematic Review from Data Acquisition to RUL Prediction. Mech. Syst. Signal Process. 2018, 104, 799–834.
  10. Hu, Y.; Miao, X.; Si, Y.; Pan, E.; Zio, E. Prognostics and Health Management: A Review from the Perspectives of Design, Development and Decision. Reliab. Eng. Syst. Saf. 2022, 217, 108063.
  11. Ferreira, C.; Gonçalves, G. Remaining Useful Life Prediction and Challenges: A Literature Review on the Use of Machine Learning Methods. J. Manuf. Syst. 2022, 63, 550–562.
  12. Wang, Y.; Zhao, Y.; Addepalli, S. Remaining Useful Life Prediction Using Deep Learning Approaches: A Review. Procedia Manuf. 2020, 49, 81–88.
  13. Arias Chao, M.; Kulkarni, C.; Goebel, K.; Fink, O. Fusing Physics-Based and Deep Learning Models for Prognostics. Reliab. Eng. Syst. Saf. 2021, 217, 107961.
  14. Li, Y.; Chen, Y.; Hu, Z.; Zhang, H. Remaining Useful Life Prediction of Aero-Engine Enabled by Fusing Knowledge and Deep Learning Models. Reliab. Eng. Syst. Saf. 2023, 229, 108869.
  15. Ramasso, E.; Saxena, A. Performance Benchmarking and Analysis of Prognostic Methods for CMAPSS Datasets. Int. J. Progn. Health Manag. 2014, 5, 1–15.
  16. Wu, F.; Wu, Q.; Tan, Y.; Xu, X. Remaining Useful Life Prediction Based on Deep Learning: A Survey. Sensors 2024, 24, 3454. [Google Scholar] [CrossRef] [PubMed]
  17. Li, X.; Ding, Q.; Sun, J.Q. Remaining Useful Life Estimation in Prognostics Using Deep Convolution Neural Networks. Reliab. Eng. Syst. Saf. 2018, 172, 1–11. [Google Scholar] [CrossRef]
  18. Zheng, S.; Ristovski, K.; Farahat, A.; Gupta, C. Long Short-Term Memory Network for Remaining Useful Life Estimation. In Proceedings of the 2017 IEEE International Conference on Prognostics and Health Management (ICPHM), Dallas, TX, USA, 19–21 June 2017; pp. 88–95. [Google Scholar]
  19. Wu, Y.; Yuan, M.; Dong, S.; Lin, L.; Liu, Y. Remaining Useful Life Estimation of Engineered Systems Using Vanilla LSTM Neural Networks. Neurocomputing 2018, 275, 167–179. [Google Scholar] [CrossRef]
  20. Sateesh Babu, G.; Zhao, P.; Li, X.L. Deep Convolutional Neural Network Based Regression Approach for Estimation of Remaining Useful Life. In Database Systems for Advanced Applications; Springer: Cham, Switzerland, 2016; pp. 214–228. [Google Scholar]
  21. Liu, L.; Song, X.; Zhou, Z. Aircraft Engine Remaining Useful Life Estimation via a Double Attention-Based Data-Driven Architecture. Reliab. Eng. Syst. Saf. 2022, 221, 108330. [Google Scholar] [CrossRef]
  22. Jiang, Y.; Lyu, Y.; Wang, Y.; Wan, P. Fusion Network Combined with Bidirectional LSTM Network and Multiscale CNN for Remaining Useful Life Estimation. In Proceedings of the 2020 12th International Conference on Advanced Computational Intelligence (ICACI), Dali, China, 14–16 August 2020; pp. 620–627. [Google Scholar]
  23. Zhang, J.; Jiang, Y.; Wu, S.; Li, X.; Luo, H.; Yin, S. Prediction of Remaining Useful Life Based on Bidirectional Gated Recurrent Unit with Temporal Self-Attention Mechanism. Reliab. Eng. Syst. Saf. 2022, 221, 108297. [Google Scholar] [CrossRef]
  24. Xu, Z.; Zhang, Y.; Miao, J.; Miao, Q. Global Attention Mechanism Based Deep Learning for Remaining Useful Life Prediction of Aero-Engine. Measurement 2023, 217, 113098. [Google Scholar] [CrossRef]
  25. Zhang, X.; Sun, J.; Wang, J.; Jin, Y.; Wang, L.; Liu, Z. PAOLTransformer: Pruning-Adaptive Optimal Lightweight Transformer Model for Aero-Engine Remaining Useful Life Prediction. Reliab. Eng. Syst. Saf. 2023, 240, 109567. [Google Scholar] [CrossRef]
  26. Mo, Y.; Wu, Q.; Li, X.; Huang, B. Remaining Useful Life Estimation via Transformer Encoder Enhanced by a Gated Convolutional Unit. J. Intell. Manuf. 2023, 34, 1997–2006. [Google Scholar] [CrossRef]
  27. Wang, Y.; Su, C.; Wang, P.; Zhen, J.; Wang, D. A Hybrid Large-Kernel CNN and Markov Feature Framework for Remaining Useful Life Prediction. Machines 2026, 14, 57. [Google Scholar] [CrossRef]
  28. Rath, S.; Saha, D.; Chatterjee, S.; Chakraborty, A.K. Remaining Useful Life Prediction of Turbofan Engine in Varied Operational Conditions Considering Change Point: A Novel Deep Learning Approach with Optimum Features. Mathematics 2025, 13, 130. [Google Scholar] [CrossRef]
  29. Tsallis, C.; Papageorgas, P.; Piromalis, D.; Munteanu, R.A. Application-Wise Review of Machine Learning-Based Predictive Maintenance: Trends, Challenges, and Future Directions. Appl. Sci. 2025, 15, 4898. [Google Scholar] [CrossRef]
  30. Szrama, S.; Kłosowski, G. Unsupervised Classification and Remaining Useful Life Prediction for Turbofan Engines Using Autoencoders and Gaussian Mixture Models: A Comprehensive Framework for Predictive Maintenance. Appl. Sci. 2025, 15, 7884. [Google Scholar] [CrossRef]
  31. Li, Z.; Wang, H.; Zhang, X.; Chen, Y. Remaining Useful Life Prediction for Aero-Engines Based on Multi-Scale Dilated Fusion Attention Model. Appl. Sci. 2025, 15, 9813. [Google Scholar] [CrossRef]
  32. Yang, X.; Chen, L.; Wang, M.; Liu, Y.; Zhang, H. A Deep-Learning Method for Remaining Useful Life Prediction of Power Machinery via Dual-Attention Mechanism. Sensors 2025, 25, 497. [Google Scholar] [CrossRef]
  33. Costa, N.; Sánchez, L. Variational Encoding Approach for Interpretable Assessment of Remaining Useful Life Estimation. Reliab. Eng. Syst. Saf. 2022, 222, 108353. [Google Scholar] [CrossRef]
  34. Ellefsen, A.L.; Bjørlykhaug, E.; Æsøy, V.; Ushakov, S.; Zhang, H. Remaining Useful Life Predictions for Turbofan Engine Degradation Using Semi-Supervised Deep Architecture. Reliab. Eng. Syst. Saf. 2019, 183, 240–251. [Google Scholar] [CrossRef]
  35. Mansourvar, Z.; Rezaee, M.J.; Eshkevari, M. An Explainable Approach for Prediction of Remaining Useful Life in Turbofan Condition Monitoring. Neural Comput. Appl. 2025, 37, 10621–10645. [Google Scholar] [CrossRef]
  36. Dereci, U.; Tuzkaya, G. An Explainable Artificial Intelligence Model for Predictive Maintenance and Spare Parts Optimization. Supply Chain Anal. 2024, 8, 100078. [Google Scholar] [CrossRef]
  37. Li, Y.; Chen, Y.; Shao, H.; Zhang, H. A Novel Dual Attention Mechanism Combined with Knowledge for Remaining Useful Life Prediction Based on Gated Recurrent Units. Reliab. Eng. Syst. Saf. 2023, 239, 109476. [Google Scholar] [CrossRef]
  38. Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef]
  39. Julier, S.J.; Uhlmann, J.K. New Extension of the Kalman Filter to Nonlinear Systems. In Proceedings of the Signal Processing, Sensor Fusion, and Target Recognition VI, Orlando, FL, USA, 21–24 April 1997; SPIE: Bellingham, WA, USA, 1997; Volume 3068, pp. 182–193. [Google Scholar]
  40. Wan, E.A.; Van Der Merwe, R. The Unscented Kalman Filter for Nonlinear Estimation. In Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium, Lake Louise, AB, Canada, 1–4 October 2000; pp. 153–158. [Google Scholar]
  41. Wang, J.; Wen, C. Real-Time Updating High-Order Extended Kalman Filtering Method Based on Fixed-Step Life Prediction for Vehicle Lithium-Ion Batteries. Sensors 2022, 22, 2574. [Google Scholar] [CrossRef] [PubMed]
  42. Duan, B.; Zhang, Q.; Geng, F.; Zhang, C. Remaining Useful Life Prediction of Lithium-Ion Battery Based on Extended Kalman Particle Filter. Int. J. Energy Res. 2020, 44, 1724–1734. [Google Scholar] [CrossRef]
  43. Nunes, T.S.N.; Moura, J.J.P.; Prado, O.G.; Camboim, M.M.; de Fatima, N.; Rosolem, M.; Beck, R.F.; Ding, H. An online unscented Kalman filter remaining useful life prediction method applied to second-life lithium-ion batteries. Electr. Eng. 2023, 105, 3481–3492. [Google Scholar] [CrossRef]
  44. Li, G.; Wei, J.; He, J.; Yang, H.; Meng, F. Implicit Kalman Filtering Method for Remaining Useful Life Prediction of Rolling Bearing with Adaptive Detection of Degradation Stage Transition Point. Reliab. Eng. Syst. Saf. 2023, 235, 109269. [Google Scholar] [CrossRef]
  45. Si, X.S.; Wang, W.; Hu, C.H.; Zhou, D.H. Remaining Useful Life Estimation–A Review on the Statistical Data Driven Approaches. Eur. J. Oper. Res. 2011, 213, 1–14. [Google Scholar] [CrossRef]
  46. Shang, X.; Li, J.; Lou, T.; Wang, Z.; Pang, X.; Zhang, Z. Adaptive Remaining Useful Life Estimation of Rolling Bearings Using an Incremental Unscented Kalman Filter with Nonlinear Degradation Tracking. Machines 2025, 13, 1058. [Google Scholar] [CrossRef]
  47. Baptista, M.; Sankararaman, S.; de Medeiros, I.P.; Nascimento, C., Jr.; Prendinger, H.; Henriques, E.M. Remaining Useful Life Estimation in Aeronautics: Combining Data-Driven and Kalman Filtering. Reliab. Eng. Syst. Saf. 2018, 184, 228–239. [Google Scholar] [CrossRef]
  48. Elattar, H.M.; Elminir, H.K.; Riad, A.M. Prognostics: A Literature Review. Complex Intell. Syst. 2016, 2, 125–154. [Google Scholar] [CrossRef]
  49. Zhao, Z.; Wang, S.; Liu, Y.; Wang, X. Linear Discriminant Analysis. Nat. Rev. Methods Primers 2024, 4, 70. [Google Scholar] [CrossRef]
  50. Fisher, R.A. The Use of Multiple Measurements in Taxonomic Problems. Ann. Eugen. 1936, 7, 179–188. [Google Scholar] [CrossRef]
  51. Maliuk, A.S.; Ahmad, Z.; Kim, J.M. A Technique for Bearing Fault Diagnosis Using Novel Wavelet Packet Transform-Based Signal Representation and Informative Factor LDA. Machines 2023, 11, 1080. [Google Scholar] [CrossRef]
  52. Yan, T.; Wang, D.; Xia, T.; Liu, J.; Peng, Z.; Xi, L. Investigation on Optimal Discriminant Directions of Linear Discriminant Analysis for Locating Informative Frequency Bands for Machine Health Monitoring. Mech. Syst. Signal Process. 2022, 178, 109288. [Google Scholar] [CrossRef]
  53. Yang, A.; Wang, Y.Z.; Chow, T.W.S. An Enhanced Trace Ratio Linear Discriminant Analysis for Fault Diagnosis: An Illustrated Example Using HDD Data. IEEE Trans. Instrum. Meas. 2019, 68, 3826–3838. [Google Scholar] [CrossRef]
  54. Yu, W.; Zhao, C. Sparse Exponential Discriminant Analysis and Its Application to Fault Diagnosis. IEEE Trans. Ind. Electron. 2018, 65, 5931–5940. [Google Scholar] [CrossRef]
  55. Li, H.; Jia, M.; Mao, Z. Dynamic Feature Extraction-Based Quadratic Discriminant Analysis for Industrial Process Fault Classification and Diagnosis. Entropy 2023, 25, 1664. [Google Scholar] [CrossRef]
  56. Heimes, F.O. Recurrent Neural Networks for Remaining Useful Life Estimation. In Proceedings of the 2008 International Conference on Prognostics and Health Management, Denver, CO, USA, 6–9 October 2008; pp. 1–6. [Google Scholar]
  57. Peng, Y.; Wang, H.; Wang, J.; Liu, D.; Peng, X. A Modified Echo State Network Based Remaining Useful Life Estimation Approach. In Proceedings of the 2019 IEEE International Conference on Prognostics and Health Management (ICPHM), San Francisco, CA, USA, 17–20 June 2019; pp. 1–6. [Google Scholar]
  58. Yıldırım, U.; Afşer, H. Linear Methods for Predictive Maintenance: The Case of NASA C-MAPSS Datasets. Appl. Sci. 2025, 15, 9945. [Google Scholar] [CrossRef]
  59. Maulana, F.; Starr, A.; Ompusunggu, A.P. Explainable Data-Driven Method Combined with Bayesian Filtering for Remaining Useful Lifetime Prediction of Aircraft Engines Using NASA CMAPSS Datasets. Machines 2023, 11, 163. [Google Scholar] [CrossRef]
  60. McInnes, L.; Healy, J.; Melville, J. UMAP: Uniform Manifold Approximation and Projection. J. Open Source Softw. 2018, 3, 861. [Google Scholar] [CrossRef]
  61. Smola, A.J.; Schölkopf, B. A Tutorial on Support Vector Regression. Stat. Comput. 2004, 14, 199–222. [Google Scholar] [CrossRef]
  62. Bishop, C.M.; Nasrabadi, N.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  63. Sun, J.; Zheng, L.; Huang, Y.; Ge, Y. Remaining Useful Life Prediction Based on CNN-BGRU-SA. J. Phys. Conf. Ser. 2022, 2405, 012007. [Google Scholar] [CrossRef]
  64. Li, Z.; Luo, S.; Liu, H.; Tang, C.; Miao, J. TTSNet: Transformer–Temporal Convolutional Network–Self-Attention with Feature Fusion for Prediction of Remaining Useful Life of Aircraft Engines. Sensors 2025, 25, 432. [Google Scholar] [CrossRef]
  65. Wang, H.-K.; Cheng, Y.; Song, K. Remaining Useful Life Estimation of Aircraft Engines Using a Joint Deep Learning Model Based on TCNN and Transformer. Comput. Intell. Neurosci. 2021, 2021, 5185938. [Google Scholar] [CrossRef]
  66. Chen, C.; Li, M.; Shi, J.; Yue, D.; Shi, G.; Feng, L.; Churakova, A.A.; Alexandrov, I.V. Efficient Channel Attention-Gated Graph Transformer for Aero-Engine Remaining Useful Life Prediction. ACS Omega 2025, 10, 44253–44267. [Google Scholar] [CrossRef]
  67. Wahid, A.; Breslin, J.G.; Intizar, M.A. TCRSCANet: Harnessing Temporal Convolutions and Recurrent Skip Component for Enhanced RUL Estimation in Mechanical Systems. Hum.-Cent. Intell. Syst. 2024, 4, 1–24. [Google Scholar] [CrossRef]
  68. Deng, S.; Zhou, J. Prediction of Remaining Useful Life of Aero-engines Based on CNN-LSTM-Attention. Int. J. Comput. Intell. Syst. 2024, 17, 639. [Google Scholar] [CrossRef]
Figure 1. LDA projection with decision boundaries for 707 test engines. Points are colored by true health state: red (Critical), orange (Warning), and green (Nominal). Ellipses represent 95% confidence regions for each class distribution. Solid lines indicate the Warning/Nominal boundary, dashed lines indicate the Critical/Warning boundary, and class centroids are marked with × symbols.
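The two-dimensional discriminant subspace of Figure 1 can be sketched with a classical Fisher LDA. The synthetic sensor matrix `X`, the three-class labels `y`, and the 14-feature dimensionality below are illustrative stand-ins for the C-MAPSS setup, not the authors' exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in sensor snapshots for the three health classes of Figure 1:
# 0 = Critical, 1 = Warning, 2 = Healthy, 100 samples per class.
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(100, 14))
               for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 100)

mean_all = X.mean(axis=0)
Sw = np.zeros((14, 14))  # within-class scatter
Sb = np.zeros((14, 14))  # between-class scatter
for k in range(3):
    Xk = X[y == k]
    mk = Xk.mean(axis=0)
    Sw += (Xk - mk).T @ (Xk - mk)
    d = (mk - mean_all)[:, None]
    Sb += len(Xk) * (d @ d.T)

# Leading eigenvectors of Sw^-1 Sb span the discriminant subspace;
# with 3 classes, at most 2 components are informative.
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
order = np.argsort(eigvals.real)[::-1]
W = eigvecs.real[:, order[:2]]  # (14, 2) projection matrix
Z = X @ W                       # coordinates in the 2-D LDA space
print(Z.shape)                  # → (300, 2)
```

Class centroids and decision boundaries, as drawn in Figure 1, would then be computed from the projected coordinates `Z` rather than from the raw 14-dimensional measurements.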
Figure 2. Proposed LDA–Kalman framework for turbofan engine prognostics.
Figure 3. Health index sensitivity analysis: (left) relative contribution of individual sensors to the health index, (right) comparison of LDA component importance based on variance contribution and correlation with RUL.
Figure 4. Training data analysis. Four-panel visualization: (top-left) training data in LDA space with class labels—Critical (blue), Warning (orange), Healthy (yellow); (top-right) LDA space colored by normalized RUL; (bottom-left) normalized RUL distribution histogram with class boundaries; (bottom-right) class distribution bar chart.
Figure 5. Error distribution by method. Box plots showing the prediction error distribution for AR, Linear KF, Extended KF, and Unscented KF. The red dashed line indicates zero error.
Figure 6. Error distribution comparison. Histograms showing error distributions with mean and standard deviation values for AR (Mean = 5.8, Std = 36.0), Linear KF (Mean = 3.5, Std = 36.4), Extended KF (Mean = 1.7, Std = 36.4), and Unscented KF (Mean = −1.2, Std = 36.7).
Figure 7. Classification confusion matrices (normalized RUL). Confusion matrices showing classification performance for AR (Acc = 70.7%), Linear KF (Acc = 71.7%), Extended KF (Acc = 72.1%), and Unscented KF (Acc = 73.3%).
Figure 8. Normalized RUL prediction comparison. Scatter plots showing predicted vs. true normalized RUL for AR (RMSEn = 0.0742, Acc = 70.7%), Linear KF (RMSEn = 0.0724, Acc = 71.7%), Extended KF (RMSEn = 0.0706, Acc = 72.1%), and Unscented KF (RMSEn = 0.0689, Acc = 73.3%). Points are colored by true class: red (Critical), orange (Warning), green (Healthy). Dashed lines indicate class boundaries at normalized RUL = 0.1 and 0.3.
Figure 9. Engine 20 degradation analysis in LDA space: (a) trajectory showing progression from healthy to critical states with filtered estimates, (b) temporal evolution of the primary LDA component tracking monotonic degradation.
Figure 10. Health state classification over time for Engine 20.
Figure 11. Normalized RUL prediction over Engine 20 lifetime (184 cycles). All methods track true RUL (black line), crossing the Warning threshold at cycle 40 and the Critical threshold at cycle 160.
Table 1. Class distribution in training and test datasets.

Class               Training Samples   Training (%)   Test Engines   Test (%)
Critical (0–0.1)    39,704             24.75          250            35.36
Warning (0.1–0.3)   77,145             48.12          414            58.55
Healthy (0.3–1.0)   43,510             27.13          43             6.09
Total               160,359            100            707            100
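The three-region partition in Table 1 follows directly from the normalized-RUL thresholds. A minimal sketch of the labeling rule (the handling of values exactly at 0.1 and 0.3 is an assumption, since the table's intervals are ambiguous at their endpoints):

```python
def health_state(rul_norm: float) -> str:
    """Map a normalized RUL value in [0, 1] to the three operational
    regions of Table 1, using thresholds 0.1 and 0.3."""
    if rul_norm < 0.1:
        return "Critical"
    if rul_norm < 0.3:
        return "Warning"
    return "Healthy"

print([health_state(r) for r in (0.05, 0.2, 0.8)])
# → ['Critical', 'Warning', 'Healthy']
```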
Table 2. Performance comparison of estimation methods for RUL prediction in LDA space (combined C-MAPSS dataset, 707 test engines). Bold indicates the best value for each metric.

Method          RMSE (Cycles)   RMSEn    MAE (Cycles)   Score       Accuracy (%)
AR (Baseline)   36.48           0.0742   28.98          1,224,299   70.72
LKF             36.59           0.0724   29.06          1,252,747   71.71
EKF             36.43           0.0706   29.05          782,269     72.14
UKF             36.69           0.0689   29.05          552,572     73.27
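The Score column in Table 2 refers to the PHM08 asymmetric penalty, which punishes late predictions (predicted RUL above the true value) much more heavily than early ones. A sketch with the standard time constants a1 = 13 (early) and a2 = 10 (late); the function name is illustrative:

```python
import math

def phm08_score(true_rul, pred_rul, a1=13.0, a2=10.0):
    """PHM08 asymmetric scoring: for d = predicted - true,
    late predictions (d >= 0) cost exp(d/a2) - 1,
    early predictions (d < 0) cost exp(-d/a1) - 1."""
    total = 0.0
    for t, p in zip(true_rul, pred_rul):
        d = p - t
        total += math.exp(d / a2) - 1 if d >= 0 else math.exp(-d / a1) - 1
    return total

# A 10-cycle-late prediction costs more than a 10-cycle-early one.
late = phm08_score([50], [60])   # exp(1) - 1 ≈ 1.718
early = phm08_score([50], [40])  # exp(10/13) - 1 ≈ 1.158
print(late > early)              # → True
```

Because the penalty grows exponentially with lateness, a handful of badly late engines dominates the total, which is why the Score column separates the methods far more sharply than RMSE does.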
Table 3. Relative improvement of Kalman filter methods over the AR baseline.

Method   RMSE Change   Score Improvement   Accuracy Improvement
LKF      −0.3%         −2.3%               +1.0%
EKF      +0.1%         +36.1%              +1.4%
UKF      −0.6%         +54.9%              +2.5%
Table 4. Error distribution statistics for each method.

Method   Mean Error (Cycles)   Std. Dev. (Cycles)   Bias Characteristic
AR       5.8                   36.0                 Positive (late predictions)
LKF      3.5                   36.4                 Slight positive
EKF      1.7                   36.4                 Near-zero (unbiased)
UKF      −1.2                  36.7                 Slight negative (conservative)
Table 5. Ablation study: effect of LDA dimensionality reduction on all datasets.

Dataset   Projection    Filter   RMSE        MAE       Score         Accuracy
FD001     Without LDA   LKF      31.22       26.16     7006          63.00%
                        EKF      31.58       26.51     7433          60.00%
                        UKF      31.87       26.86     7782          60.00%
          LDA           LKF      26.83       21.68     4182          68.00%
                        EKF      26.84       21.79     4098          67.00%
                        UKF      26.99       21.93     4156          66.00%
FD002     Without LDA   LKF      7468.79     4851.48   –             20.08%
                        EKF      7402.61     4792.63   –             19.31%
                        UKF      7400.39     4789.77   –             19.69%
          LDA           LKF      31.84       26.12     14,892        73.00%
                        EKF      32.00       26.31     14,156        74.00%
                        UKF      32.91       27.05     15,234        72.00%
FD003     Without LDA   LKF      56.31       44.60     3.50 × 10^8   57.00%
                        EKF      56.63       44.92     3.70 × 10^8   57.00%
                        UKF      56.96       45.31     3.77 × 10^8   57.00%
          LDA           LKF      41.24       32.87     1.25 × 10^6   62.00%
                        EKF      40.75       32.41     1.18 × 10^6   63.00%
                        UKF      41.09       32.68     1.21 × 10^6   62.00%
FD004     Without LDA   LKF      13,191.67   8020.93   –             23.79%
                        EKF      13,156.22   7965.83   –             23.39%
                        UKF      13,138.72   7943.55   –             23.39%
          LDA           LKF      39.21       31.86     187,542       69.00%
                        EKF      39.25       31.89     191,283       69.00%
                        UKF      40.03       32.51     198,764       68.00%
Table 6. Computational cost analysis.

Training Phase (160,359 states): Total Training Time 1.3617 s; Time per State 8.4917 μs.

Test Phase (707 engines, 104,897 cycles)
Metric                 AR         LKF      EKF      UKF
Total Time (s)         36.2564    0.2296   0.1786   1.0764
Avg. per Engine (ms)   51.2821    0.3248   0.2527   1.5225
Avg. per Cycle (μs)    345.6382   2.1890   1.7031   10.2615

Engine 20 Case Study (184 cycles)
Processing Time (ms)   55.4301    0.3822   0.2984   1.8398
Table 7. Performance comparison with state-of-the-art methods on the C-MAPSS dataset (RMSE in cycles).

Method             Type               FD001   FD002   FD003   FD004   Ref.
MLP                Deep Learning      37.56   80.03   37.39   77.37   [15]
SVR                Machine Learning   20.96   42.00   21.05   45.35   [61]
RVR                Bayesian           23.80   31.30   22.37   34.34   [62]
LSTM               Deep Learning      16.14   24.49   16.18   28.17   [18]
CNN                Deep Learning      18.45   –       19.82   –       [17]
BiLSTM-AE          Deep Learning      12.25   17.08   13.39   19.86   [21]
CNN-BGRU-SA        Deep Learning      13.88   17.25   14.85   19.39   [63]
TTSNet             Transformer        11.02   13.25   11.06   18.26   [64]
TCNN–Transformer   Hybrid DL          12.31   15.35   12.32   18.35   [65]
EGG–Transformer    Transformer        13.64   14.67   11.62   14.38   [66]
TCRSCANet          Deep Learning      10.43   11.02   10.03   16.23   [67]
CNN-LSTM-Att       Hybrid DL          15.98   14.45   13.91   16.64   [68]
UKF+LR             Kalman-based       21.50   –       22.40   –       [59]
LDA-AR             Linear             30.40   35.40   45.80   40.30   [58]
Proposed LDA-LKF   Kalman-based       26.83   31.84   41.24   39.21   (Combined: 36.59)
Proposed LDA-EKF   Kalman-based       26.84   32.00   40.75   39.25   (Combined: 36.43)
Proposed LDA-UKF   Kalman-based       26.99   32.91   41.09   40.03   (Combined: 36.69)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
