MOSOF with NDCI: A Cross-Subsystem Evaluation of an Aircraft for an Airline Case Scenario
Abstract
1. Introduction
2. Materials and Methods
2.1. Normalised Diagnostic Contribution Index (NDCI) vs. mRMR
- Separation Power (SP): This component quantifies the magnitude of a sensor’s response to a fault relative to its baseline noise. It is calculated as the mean absolute deviation of the sensor signal from its healthy baseline across all fault conditions, normalised by the dynamic range observed during healthy operation.
- Severity Sensitivity (S): This component measures how well a sensor’s response tracks the progression of a fault. It is computed by normalising the sensor’s absolute deviation by a factor representing the fault’s severity, rewarding sensors that show a monotonic response as degradation deepens.
- Uniqueness (U): This component promotes informational diversity within the sensor suite. To avoid selecting clusters of redundant sensors, it penalises signals that are highly correlated with others. It is computed as one minus the average absolute Pearson correlation between a given sensor and all others under fault conditions. It is important to distinguish that the NDCI “Uniqueness” component penalises mathematical redundancy (high statistical correlation) rather than physical coupling. This ensures that the algorithm selects sensors along the propagation path of a fault rather than clustering multiple sensors at the single point of highest impact.
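The three components can be sketched numerically. The Python sketch below assumes simple, hypothetical array layouts (healthy and faulty sample matrices, one column per sensor, plus a per-sample severity factor) and is illustrative rather than the authors' implementation:

```python
import numpy as np

def ndci_scores(healthy, faulty, severity):
    """Sketch of the three NDCI components for each sensor (column).

    healthy  : (n_healthy, n_sensors) baseline samples
    faulty   : (n_faulty, n_sensors) samples pooled over fault conditions
    severity : (n_faulty,) fault-severity factor for each faulty sample
    """
    baseline = healthy.mean(axis=0)
    dyn_range = healthy.max(axis=0) - healthy.min(axis=0) + 1e-12

    # Separation Power: mean absolute deviation from the healthy baseline,
    # normalised by the dynamic range observed during healthy operation.
    dev = np.abs(faulty - baseline)
    sp = dev.mean(axis=0) / dyn_range

    # Severity Sensitivity: deviation normalised by the fault-severity factor,
    # rewarding responses that track degradation depth.
    s = (dev / severity[:, None]).mean(axis=0) / dyn_range

    # Uniqueness: one minus the mean absolute Pearson correlation with all
    # other sensors under fault conditions.
    corr = np.abs(np.corrcoef(faulty, rowvar=False))
    n = corr.shape[0]
    u = 1.0 - (corr.sum(axis=0) - 1.0) / (n - 1)

    # Composite NDCI = SP + S + U after per-component normalisation.
    return sp, s, u
```

The uniqueness term is what spreads selections along a fault's propagation path: a duplicated channel correlates almost perfectly with its twin and receives U near zero.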
2.2. Data and Methods
- Engine: The model chosen for the engine digital twin is the Pratt & Whitney JT9D open-source turbofan engine model provided by the T-MATS toolbox in MATLAB R2024a Simulink [18].
- Fuel System: A Simulink-based digital twin of a laboratory fuel rig, capable of simulating pump degradation, blockages, and other flow-related faults [2].
- Electrical Power System (EPS): A simulation model of the EPS that captures generator load dynamics, with an Adaptive Neuro-Fuzzy Inference System (ANFIS) used for its diagnosis [2].
- Environmental Control System (ECS): Proprietary models that simulate bleed air management and thermal performance of heat exchangers.
Problem Formulation
- Maximise Diagnostic Performance (P): Defined as the aggregated classifier accuracy or the cumulative NDCI score of the selected suite S: P = ∑i∈S NDCIi
- Minimise Cost (C): Defined as the sum of acquisition and integration costs for the selected sensors: C = ∑i∈S (ci,acq + ci,int)
- Maximise Reliability (R): Defined as the series-equivalent (harmonic) Mean Time Between Failures (MTBF) of the suite, ensuring the system is not compromised by its weakest link: R = MTBFsuite = 1/∑i∈S (1/MTBFi)
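Scoring a candidate suite under these definitions is direct. The sketch below uses hypothetical per-sensor fields (`ndci`, `acq_cost`, `int_cost`, `mtbf`); the suite MTBF is the series-equivalent value (reciprocal of summed failure rates) used throughout the paper:

```python
def score_suite(sensors):
    """Score a candidate suite on the three MOSOF objectives.

    `sensors` is a list of dicts with hypothetical keys:
    'ndci', 'acq_cost', 'int_cost', 'mtbf'.
    """
    # Performance: cumulative NDCI of the selected sensors (maximise).
    performance = sum(s['ndci'] for s in sensors)
    # Cost: acquisition plus integration cost (minimise).
    cost = sum(s['acq_cost'] + s['int_cost'] for s in sensors)
    # Reliability: series-equivalent MTBF, i.e. the reciprocal of summed
    # failure rates, so the weakest sensor dominates (maximise).
    mtbf = 1.0 / sum(1.0 / s['mtbf'] for s in sensors)
    return performance, cost, mtbf
```

Note that the series-equivalent MTBF of two 100 kh sensors is 50 kh, which is exactly the "weakest link" penalisation the objective is meant to encode.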
2.3. Classifier Evaluation
3. Results
- 1. Platform symptom vectors. How each fault perturbs sensors across the subsystems is visualised by plotting normalised deviations from the healthy baseline, generating a fault signature for each mode. These subsystem-level fault-mode readings are used to generate platform-level plots, which show where a fault's influence is concentrated and how it propagates into other subsystems. The platform-level comparisons justify the subsequent design choices by revealing genuine cross-subsystem couplings, while also showing that a single platform-wide ranking is not robust for this dataset.
- 2. Subsystem-level, quantitatively comparable rankings and classifiers. Because a global ranking was unstable (dominated by a few high-variance channels and heterogeneous scaling), feature rankings and classifiers are evaluated per subsystem. This isolates method effects, prevents cross-subsystem leakage during training, and yields comparable, leakage-free performance estimates for the engine, fuel, ECS, and EPS. The resulting candidate sensors and their measured diagnostic value are then handed to MOSOF (Section 4), which composes platform suites that mix sensors across subsystems under cost/reliability constraints.
3.1. Cross-Subsystem Synergies and Feature Ranking
- Fuel → engine. Fuel-system faults modulate flow and manifold pressures and, through combustion, alter engine temperature/enthalpy signatures. This is visible in the fuel and engine symptom-vector plots and reappears in the final suite via flow (fuel) and engine thermodynamic channels with high NDCI scores.
- Engine bleed-air → ECS thermal chain. Electrical FM5 (ACS TCV closed) drives strong excursions along the ECS heat-exchanger temperature chain and the engine bleed/mass-flow variables; the cross-subsystem effect distribution shows the largest bars on ECS thermal sensors with a concurrent spike on bleed-air mass-flow. NDCI prioritises these same regions and persists in multi-objective selection.
- Pneumatic/thermal → electrical load. Actuation and valve states couple back into electrical channels through load changes, explaining why some EPS power measurements enter the Pareto-efficient suites despite modest stand-alone separation.
3.2. Baseline vs. Nested Cross-Validation Performance
- Goal: Obtain unbiased performance estimates while comparing ranking methods fairly.
- Outer loop (testing): Data are partitioned into outer training and test folds (stratified by fault mode). The test fold remains untouched until the final evaluation in that outer iteration.
- Train-only ranking: Within each outer training set, NDCI and the baselines (e.g., mRMR) are computed using training data only. Redundant sensors are pruned on training faults.
- Inner loop (model selection): On the training faults only, an inner K-fold loop evaluates a single, fixed classifier while adding sensors stepwise in the given rank order (top-1, top-2, …). At each step, the inner-CV mean ± std accuracy is recorded. The number of sensors, k, is chosen by a one-standard-error (SE) rule from this inner loop.
- Fit and test: Using the top-k sensors for that ranking, the classifier is retrained on the whole outer training set and predicts the held-out outer test faults.
- Repeats and aggregation: The entire process is repeated with different partitions; outer-test predictions are pooled to form the confusion matrices and summary metrics (balanced accuracy for detection/isolation).
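The nested protocol above can be sketched as follows. `RandomForestClassifier` stands in for the class-weighted bagged tree ensemble, and `rank_features` is a placeholder for any train-only ranking (NDCI or mRMR); both are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import StratifiedKFold, cross_val_score

def nested_cv(X, y, rank_features, n_outer=5, n_inner=3, seed=0):
    """Nested CV with train-only ranking and a one-SE rule for choosing k.

    rank_features(X_tr, y_tr) returns feature indices, best first.
    """
    outer = StratifiedKFold(n_outer, shuffle=True, random_state=seed)
    y_true_all, y_pred_all = [], []
    for tr, te in outer.split(X, y):
        order = rank_features(X[tr], y[tr])  # ranking sees training data only
        # Inner loop: stepwise curves for top-1, top-2, ... sensors.
        means, stds = [], []
        for k in range(1, len(order) + 1):
            clf = RandomForestClassifier(n_estimators=50,
                                         class_weight='balanced',
                                         random_state=seed)
            scores = cross_val_score(
                clf, X[tr][:, order[:k]], y[tr],
                cv=StratifiedKFold(n_inner, shuffle=True, random_state=seed))
            means.append(scores.mean())
            stds.append(scores.std())
        # One-SE rule: smallest k within one std of the best inner mean.
        best = int(np.argmax(means))
        threshold = means[best] - stds[best]
        k = next(i + 1 for i, m in enumerate(means) if m >= threshold)
        # Retrain on the full outer training set, predict the held-out fold.
        clf = RandomForestClassifier(n_estimators=50, class_weight='balanced',
                                     random_state=seed)
        clf.fit(X[tr][:, order[:k]], y[tr])
        y_true_all.extend(y[te])
        y_pred_all.extend(clf.predict(X[te][:, order[:k]]))
    # Pool outer-test predictions into a single balanced-accuracy estimate.
    return balanced_accuracy_score(y_true_all, y_pred_all)
```

Because the ranking is recomputed inside every outer training fold, the held-out test fold never influences sensor selection, which is what makes the final estimate leakage-free.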
- How to read the figures:
- Stepwise curves plot inner-CV mean ± std accuracy vs. the number of sensors added; this reflects the sample efficiency of each ranking.
- Tables/summary bars report outer-test performance aggregated over repeats/folds; this reflects generalisation.
- Confusion matrices aggregate outer-test predictions; this reveals which faults are confused in practice.
- Repeats/folds: 10 repeats; 5 outer folds for testing; 3 inner folds for model selection.
- Train/test split: within each outer fold, feature ranking uses training partitions only; held-out tests are never seen until the final evaluation.
- What is ranked: training-fault rows with a healthy baseline; redundant sensors pruned at |ρ| > 0.995 on training data.
- Learner: a class-weighted bagged decision-tree ensemble (identical across methods) with inverse-frequency class costs.
- Model selection: inner-CV stepwise curves determine k using a one-SE rule; only the top-k sensors are carried to the outer test.
- Metric alignment: stepwise panels show inner-CV accuracy (mean ± sd); outer-test summaries report balanced accuracy aggregated across repeats.
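The redundancy-pruning step (|ρ| > 0.995 on training data) can be sketched as a greedy filter over the ranked sensor list; the function below is illustrative, not the authors' code:

```python
import numpy as np

def prune_redundant(X, order, threshold=0.995):
    """Walk the ranking best-first and drop any sensor whose absolute
    Pearson correlation with an already-kept sensor exceeds `threshold`.

    X     : (n_samples, n_sensors) training data only
    order : sensor indices, best-ranked first
    """
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in order:
        if all(corr[j, k] <= threshold for k in kept):
            kept.append(j)
    return kept
```

Pruning on training partitions only keeps the pipeline consistent with the leakage rules above: the held-out fold never influences which sensors survive.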
- Left panel—NDCI components (stacked bars). Each horizontal bar = one sensor. The bar is divided into three normalised NDCI components: SP (Separation Power), S (Severity Sensitivity), and U (Uniqueness). The total bar length is proportional to the composite NDCI (SP + S + U, i.e., higher is better). Ordering: sensors are sorted by composite NDCI, so "better" sensors appear lower in the plot. In other words, read the left panel from bottom to top: the bottom bars represent the highest-NDCI sensors.
- Right panel—mRMR rank values (shorter is better). This panel shows only the rank order produced by the mRMR baseline. The bar length equals the numerical rank (1 = best, 2 = second-best, …). Important: the y-axis lists sensors alphabetically, not by rank; consequently, rank 2 is not necessarily underneath rank 1. To interpret the panel, look at the number along the x-axis (bar length): shorter bars = better ranks.
4. Airline-Centric MOSOF Trade-Off Study
- A flat region near the budget floor, around USD32–36k, where additional spend yields little performance change;
- A moderate-slope region up to ≈0.68 performance;
- A steep, diminishing-returns region beyond ≈0.69 where sizeable cost increases buy only small gains.
- Why do points with similar performance–cost differ? The third axis reveals their reliability separation. For example, the designs clustered near 0.70 performance in Figure 15 occupy different heights in Figure 16 (corresponding to different reliability levels), which explains the wide cost band seen in the projection.
- Where the efficient “ridge” lies in 3D, the front forms a curved surface; moving toward higher performance can either lift reliability (good) or flatten/drop it (undesirable). Seeing this surface helps identify regions where small cost increases improve both performance and reliability, versus areas where one improves at the expense of the other.
- 1. Per-subsystem evidence (Section 3.1 and Section 3.2). Sensors are ranked within each subsystem using the NDCI values; these rankings provide the diagnostic value of candidate sensors.
- 2. Platform design space and objectives. A feasible catalogue of multi-subsystem sensor suites is hypothetically generated under integration rules and subsystem quotas. Each suite is scored on the three primary objectives: performance (↑ NDCI-based diagnostic score aggregated over selected sensors), cost (↓ purchase + integration), and reliability (↑ suite MTBF as defined below). A derived view, benefit-to-cost (↑ a normalised combination of performance and reliability per dollar), is used only for visual triage and does not define Pareto optimality.
- Costs range from USD240 to USD12.5k (medians by subsystem: engine USD1.55k, fuel USD1.16k, EPS USD0.65k, ECS USD0.65k).
- MTBFs (MTBFsuite = 1/∑i(1/MTBFi)) span 88–350 kh (medians by subsystem: EPS 177 kh, ECS 151 kh, engine 148 kh, fuel 119 kh; harmonic-mean MTBFs: EPS 184 kh, ECS 159 kh, engine 144 kh, fuel 120 kh).
- By sensor family, typical cost/MTBF ranges are:
- Flow: USD4.0–12.5k, 88–150 kh
- Torque: USD2.54–6.48k, 91–152 kh
- Pressure: USD0.88–2.38k, 105–193 kh
- Temperature: USD0.46–1.19k, 127–219 kh
- Thermodynamic property: USD0.81–1.85k, 101–187 kh
- Electrical (voltage/current/power): USD0.24–1.26k, 161–232 kh
- Computed metrics (e.g., thrust, TSFC): USD300, 350 kh
- 3. Pareto filtering and knee selection. Non-dominated suites form the Pareto set (Figure 16 and Figure 17). A knee is chosen at the point of highest trade-off efficiency in normalised objective space (largest local curvature/smallest distance to the utopia corner among neighbours). That single design is summarised in Table 5 and detailed in Figure 19 and Figure 20.
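A minimal sketch of Pareto filtering and utopia-distance knee selection (one of the two knee criteria mentioned above) follows; the data layout is hypothetical and the curvature variant is omitted for brevity:

```python
import numpy as np

def pareto_knee(perf, cost, mtbf):
    """Return the non-dominated suite indices and a knee index chosen by
    smallest distance to the utopia corner in min-max normalised space."""
    # Convert all objectives to "minimise" by negating the maximised ones.
    obj = np.column_stack([-np.asarray(perf, float),
                           np.asarray(cost, float),
                           -np.asarray(mtbf, float)])
    n = len(obj)
    # A suite i is dominated if some j is no worse everywhere, better somewhere.
    nd = [i for i in range(n)
          if not any(np.all(obj[j] <= obj[i]) and np.any(obj[j] < obj[i])
                     for j in range(n))]
    front = obj[nd]
    # Min-max normalise each objective over the front; utopia = all-zeros corner.
    lo, hi = front.min(axis=0), front.max(axis=0)
    norm = (front - lo) / np.where(hi > lo, hi - lo, 1.0)
    knee = nd[int(np.argmin(np.linalg.norm(norm, axis=1)))]
    return nd, knee
```

On a toy front, a suite that is dominated on all three objectives is filtered out, and the knee lands on the design closest to the ideal corner after normalisation.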
5. Discussion
6. Conclusions
- Diagnostic ranking: Across subsystems, the NDCI (which scores sensors by separation power, severity sensitivity, and uniqueness) consistently produces more compact and practical suites than relevance-based baselines. Under the nested protocol, NDCI outperforms mRMR on engine (balanced accuracy 0.886 vs. 0.690) and ECS (0.677 vs. 0.520), is comparable on EPS (0.518 vs. 0.510), and concedes a small margin on fuel (0.487 vs. 0.530), where limited sensor diversity constrains uniqueness. These results align with the stepwise inner CV curves, which show that NDCI reaches target accuracy with fewer sensors, indicating improved sample efficiency.
- Cross-subsystem evidence: Evaluation on PSV and severity sweeps of fault modes reveals physically plausible macro pathways; for example, fuel to engine via combustion signatures, engine bleed-air to ECS along the heat-exchanger temperature chain, and ECS to EPS via load changes were all visible on the associated PSVs. FM5’s PSV is presented in Section 3.1 for illustrative purposes; other PSVs can be obtained from the code bundle provided in the references. NDCI’s uniqueness term disperses selections along these pathways, avoiding clusters of near-duplicate measurements. This explains the higher isolation capability with compact suites.
- Stakeholder trade-offs and the Pareto front: Multi-objective optimisation over performance↑—cost↓—reliability↑ yields a feasible Pareto set with clear regimes: a low-cost, flat-performance floor; a moderate-slope region; and a diminishing-returns region near 0.69–0.71 performance. The knee solution—identified by maximum local curvature in normalised objective space—comprises 12 sensors (Engine 5, Fuel 2, EPS 2, ECS 3) delivering ≈ 0.69 performance at ≈USD36k with ≈145 kh suite MTBF. The 3D Pareto view clarifies why designs with similar performance and cost can differ materially (reliability is the key factor that separates them), supporting informed choices for airlines, OEMs, or MROs.
- Reliability treatment: Suite-level reliability is computed as a series-equivalent MTBF (reciprocal of summed failure rates), which appropriately penalises the weakest elements. Catalogue medians show only modest differences among subsystems (Engine close to ECS), so overall suite reliability is driven by the specific sensor families selected (e.g., flow/torque tend to be costlier and less reliable than temperature/electrical).
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| OEM | Original Equipment Manufacturer |
| MRO | Maintenance, Repair and Overhaul |
| MTBF | Mean Time Between Failures |
| VAM | Virtual Aircraft Model |
| EPS | Electrical Power System |
| ECS | Environmental Control System |
| mRMR | minimum Redundancy–maximum Relevance |
| MOSOF | Multi-Objective Sensor Optimisation Framework |
| NDCI | Normalised Diagnostic Contribution Index |
| IVHM | Integrated Vehicle Health Management |
References
- Suslu, B.; Ali, F.; Jennions, I.K. Normalised Diagnostic Contribution Index (NDCI) Integration to Multi Objective Sensor Optimisation Framework (MOSOF)—An Environmental Control System Case. Sensors 2024, 24, 2661. [Google Scholar] [CrossRef] [PubMed]
- Ezhilarasu, C.M.; Skaf, Z.; Jennions, I.K. A Generalised Methodology for an Effective Diagnosis of Aircraft Systems with Interacting Subsystems. IEEE Access 2021, 9, 59616–59633. [Google Scholar] [CrossRef]
- Vachtsevanos, G.; Lewis, F.L.; Roemer, M.; Hess, A.; Wu, B. Intelligent Fault Diagnosis and Prognosis for Engineering Systems; John Wiley & Sons: Hoboken, NJ, USA, 2006. [Google Scholar]
- Zhang, W.; Wang, J. A framework of airplane Integrated health management. In Proceedings of the 2016 IEEE Second International Conference on Multimedia Big Data (BigMM), Taipei, Taiwan, 20–22 April 2016; pp. 363–366. [Google Scholar]
- Peng, H.; Long, F.; Ding, C. Feature selection based on mutual information: Criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1226–1238. [Google Scholar] [CrossRef] [PubMed]
- Lei, Y.; Li, N.; Guo, L.; Li, N.; Yan, T.; Lin, J. Machinery health prognostics: A systematic review from data acquisition to RUL prediction. Mech. Syst. Signal Process. 2018, 104, 799–834. [Google Scholar] [CrossRef]
- Civera, M.; Surace, C. A multi-objective genetic algorithm for robust optimal sensor placement. Comput.-Aided Civ. Infrastruct. Eng. 2021, 36, 1419–1436. [Google Scholar] [CrossRef]
- Mezura-Montes, E.; Coello Coello, C.A. Multiobjective Evolutionary Algorithms in Aeronautical and Aerospace Engineering. IEEE Trans. Evol. Comput. 2014, 18, 493–514. [Google Scholar]
- Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
- Zitzler, E.; Thiele, L. Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Trans. Evol. Comput. 1999, 3, 257–271. [Google Scholar] [CrossRef]
- SAE International. ARP4754B: Guidelines for Development of Civil Aircraft and Systems; SAE International: Warrendale, PA, USA, 2023. [Google Scholar]
- Federal Aviation Administration. AC 25.1309-1B, System Design and Analysis; FAA: Washington, DC, USA, 1996. [Google Scholar]
- What the True Cost of Aircraft Components Reveals About Industrial Supply Chains. Available online: https://impactograph.com/what-the-true-cost-of-aircraft-components-reveals-about-industrial-supply-chains (accessed on 13 December 2025).
- Structural Health Monitoring Cost Estimation of a Piezosensorized. Available online: https://pmc.ncbi.nlm.nih.gov/articles/PMC8915022 (accessed on 13 December 2025).
- Kaufmann, M.; Zenkert, D.; Wennhage, P. Integrated cost/weight optimization of aircraft structures. Struct. Multidisc. Optim. 2010, 41, 325–334. [Google Scholar] [CrossRef]
- Optimal Sensor Selection for Health Monitoring Systems—NASA Technical Reports Server (NTRS). Available online: https://ntrs.nasa.gov/api/citations/20050237898/downloads/20050237898.pdf (accessed on 13 December 2025).
- Multi-Objective Optimization Using Genetic Algorithms: A Tutorial. Available online: https://pure.psu.edu/en/publications/multi-objective-optimization-using-genetic-algorithms-a-tutorial (accessed on 13 December 2025).
- Chapman, J.W.; Lavelle, T.M.; May, R.D.; Litt, J.S.; Guo, T. T-MATS: Toolbox for the Modeling and Analysis of Thermodynamic Systems. In Proceedings of the 50th AIAA/ASME/SAE/ASEE Joint Propulsion Conference, Cleveland, OH, USA, 28–30 July 2014. [Google Scholar] [CrossRef]
- Andrews, D.K.; Jennions, I.K.; Cordelia, E.A. A Platform View of Aircraft Data Simulation and Analysis. Preprint 2025, 2025031945. [Google Scholar] [CrossRef]
- GitHub Code Bundle. Available online: https://github.com/ssl8/NDCI-with-MOSOF (accessed on 13 December 2025).
| Subsystem | Fault Mode | Description |
|---|---|---|
| EPS | “FM1” | AC Motor Fault (FS Motor) |
| EPS | “FM2” | FS Nozzle Switch Open |
| EPS | “FM3” | FS Valve Switch Open |
| EPS | “FM4” | Engine Bleed Valve Switch Open |
| EPS | “FM5” | ECS TCV Switch Open |
| EPS | “FM6” | AC Lamp Instru Switch Open |
| EPS | “FM7” | AC Lamp Fluoro Switch Open |
| FS | “FM8” | Pump External Leakage (DPV1) |
| FS | “FM9” | Pump Internal Leakage (DPV2) |
| FS | “FM10” | FOHE Clogging (DPV3) |
| FS | “FM11” | FOHE Leakage (DPV4) |
| FS | “FM12” | Fuel Nozzle Clogging (DPV5) |
| FS | “FM13” | Reduced Pump RPM |
| ENG | “FM14” | LPT Blade Broken |
| ENG | “FM15” | LPC Fouling |
| ENG | “FM16” | HPT Blade Broken |
| ENG | “FM17” | HPC Contamination |
| ENG | “FM18” | Fan FOD |
| ENG | “FM19” | Bleed Valve Angle |
| ENG | “FM20” | CDP Leak |
| ECS | “FM21” | Primary Heat Exchanger (PHX) Fouling |
| ECS | “FM22” | PHX—Blockage of Cold Mass Flow |
| ECS | “FM23” | Secondary Heat Exchanger (SHX) Fouling |
| ECS | “FM24” | Air Cycle Machine (ACM) Mechanical Efficiency |
| ECS | “FM25” | RAM Mass Flow Blockage |
| Subsystem | Task | Best Classifier |
|---|---|---|
| Engine | Detection | Bag |
| Engine | Isolation | Bag |
| Fuel | Detection | Bag |
| Fuel | Isolation | nbKernel |
| EPS (Elec) | Detection | Bag |
| EPS (Elec) | Isolation | svmRBF |
| ECS | Detection | Bag |
| ECS | Isolation | Bag |
| Subsystem | Method | Balanced Accuracy | Sensors Used |
|---|---|---|---|
| Engine | NDCI | 0.926 | 9 |
| Engine | mRMR | 0.778 | 12 |
| Fuel | NDCI | 0.667 | 5 |
| Fuel | mRMR | 0.583 | 7 |
| EPS | NDCI | 0.731 | 8 |
| EPS | mRMR | 0.346 | 12 |
| ECS | NDCI | 0.812 | 6 |
| ECS | mRMR | 0.812 | 8 |
| Subsystem | Method | Balanced Accuracy | Sensors Used |
|---|---|---|---|
| Engine | NDCI | 0.886 | 10 |
| Engine | mRMR | 0.690 | 12 |
| Fuel | NDCI | 0.487 | 5 |
| Fuel | mRMR | 0.530 | 5 |
| EPS | NDCI | 0.518 | 7 |
| EPS | mRMR | 0.510 | 8 |
| ECS | NDCI | 0.677 | 6 |
| ECS | mRMR | 0.520 | 7 |
| Objective | Knee Value | Description |
|---|---|---|
| Diagnostic Performance | ≈0.69 | Normalised NDCI-based score of the selected suite |
| Cost | ≈USD36k | Sum of sensor purchase and integration costs |
| Reliability | ≈145 kh | Series-equivalent suite MTBF (reciprocal of summed sensor failure rates) |
| Sensors per Subsystem | Engine 5, Fuel 2, EPS 2, ECS 3 | Composition of the knee suite |
Share and Cite
Suslu, B.; Ali, F.; Jennions, I.K. MOSOF with NDCI: A Cross-Subsystem Evaluation of an Aircraft for an Airline Case Scenario. Sensors 2026, 26, 160. https://doi.org/10.3390/s26010160