Article

Condition-Based Maintenance Decision-Making for Multi-Component Systems with Integrated Dynamic Bayesian Network and Proportional Hazards Model

1 The Fifth Institute of Electronics, Ministry of Industry and Information Technology, Guangzhou 511370, China
2 Guangdong Provincial Key Laboratory of Electronic Information Products Reliability Technology, Guangzhou 511370, China
3 Key Laboratory of CNC Equipment Reliability, Ministry of Education, Jilin University, Changchun 130022, China
4 School of Mechanical and Aerospace Engineering, Jilin University, Changchun 130022, China
5 Chongqing Research Institute of Jilin University, Chongqing 401120, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(23), 12793; https://doi.org/10.3390/app152312793
Submission received: 5 November 2025 / Revised: 23 November 2025 / Accepted: 27 November 2025 / Published: 3 December 2025
(This article belongs to the Section Applied Industrial Technologies)

Abstract

A condition-based maintenance decision-making framework for multi-component systems is proposed in this work by integrating a dynamic Bayesian network (DBN) with a proportional hazards model (PHM). The framework addresses the challenge of handling mixed failure types and complex failure dependencies, which often lead to inaccurate maintenance decisions in existing methods. In the integrated model, the DBN captures the failure evolution and both the dynamic and static dependencies among components, while the PHM enhances the capability to characterize mixed failure interactions, thereby covering the three common types of failure dependency in multi-component systems. The model is formulated and solved as a finite-horizon Markov decision process (MDP), with the optimal maintenance strategy obtained by maximizing the total expected reward. Numerical case studies demonstrate the framework's flexibility in handling mixed failures and complex dependencies, showing its potential to effectively support condition-based maintenance decision-making for complex multi-component systems.

1. Introduction

In modern industrial systems, unplanned downtime of critical equipment due to failures can result in significant economic losses and serious safety incidents. As a result, developing effective condition-based maintenance (CBM) strategies that balance system risk and maintenance costs has become a central focus in reliability engineering. Accurate maintenance decisions rely heavily on the precise assessment of system failure states. However, traditional single-component failure models and maintenance policies are often inadequate for multi-component systems commonly encountered in engineering practice. Moreover, the prevalence of failure dependencies, such as structural, stochastic, and economic dependencies [1,2], makes the assumption of component independence unrealistic, frequently leading to biased state estimation and suboptimal maintenance decisions. Thus, designing accurate and efficient maintenance strategies for multi-component systems remains a key challenge.
Existing research on maintenance decision-making for multi-component systems primarily addresses two issues: (1) capturing the failure states of individual components, and (2) quantifying failure dependencies among components. Component failures are generally classified as either discrete (hard failures) or degradation-based (soft failures), and are commonly modeled using failure rate functions, Markov processes, or other stochastic models [3,4]. To represent failure interactions, commonly used approaches include copula functions, proportional hazards models (PHM), and state-space models [5]. Copula functions are particularly suitable for capturing dependencies among multiple degrading components. For example, Xu [6] applied a copula to model stochastic dependence in a K-out-of-N system with degrading components and developed an optimized CBM strategy within a Markov decision process (MDP) framework. Similarly, Li [7] used a Nested Lévy copula to represent hierarchical stochastic dependencies in a four-component system and derived a corresponding maintenance policy.
The PHM incorporates covariate effects into the baseline hazard function, allowing for the representation of diverse and dynamic failure dependencies. Xu [8] extended the PHM to include state-dependent covariates, constructing a joint failure rate model for a two-component system and applying it to optimize gearbox bearing maintenance. Zhou [9] integrated a non-homogeneous Poisson process into the PHM covariates to support maintenance decisions for systems subject to competing dependent risks—minor and major failures. Hu [10] incorporated a degradation process as a time-varying covariate in the PHM to estimate the failure rate of a hard failure system, enabling online maintenance decisions based on remaining useful life distribution.
Bayesian Networks, as probabilistic graphical models, offer a flexible way to represent failure dependencies among components. Dynamic Bayesian networks (DBNs) can further capture time-varying dependencies, making them well-suited for reliability modeling and maintenance planning of multi-component systems [11,12,13,14]. Hu et al. [15] combined DBNs with Hazard and Operability analysis for opportunistic predictive maintenance planning. Cai et al. [16] used DBNs to compare perfect repair, imperfect repair, and preventive maintenance strategies for a subsea blowout preventer. Özgür-Ünlüakın [17] incorporated maintenance action nodes into a DBN to model stochastic, structural, and economic dependencies, and designed proactive maintenance policies for a thermal power plant. Faddoul [18] and Morato [19] integrated Bayesian networks with MDPs and used dynamic programming and decision trees to optimize maintenance and inspection policies.
As multi-component systems grow in structural and dependency complexity, recent research has trended toward incorporating more components and multiple dependency types. However, multivariate copulas are limited to representing a single type of joint distribution and struggle with mixed failure modes and dynamic dependencies. While PHMs accommodate dynamic covariates, they lack flexibility in modeling multiple coexisting dependency types across many components. DBNs naturally support scalable and dynamic dependency modeling, yet existing DBN-based maintenance models often assume a single failure type per component.
To address these limitations, this paper introduces an integrated modeling framework that combines DBN with PHM for CBM of complex multi-component systems. A comparative analysis that systematically organizes representative works in multi-component system maintenance decision-making is given in Table 1. The main contributions of this study are as follows:
  • The proposed framework offers greater flexibility in capturing complex failure interactions, especially mixed-type failure dependencies.
  • It remains compatible with MDP-based maintenance optimization, enabling practical and scalable decision support.
The remainder of this paper is organized as follows: Section 2 introduces the DBN-PHM hybrid model for mixed failure modeling. Section 3 presents the MDP-based maintenance decision method built on this model framework. Section 4 presents a numerical case study demonstrating the model’s effectiveness in handling mixed failures and optimizing maintenance decisions. Lastly, Section 5 presents the conclusions and future research directions.

2. Model Description

2.1. Description of Multi-Component Systems

This study focuses on multi-component systems consisting of components with different failure types, i.e., some components experience discrete (hard) failures, while others undergo degradation (soft) failures. The failure probability or degradation process of these components may be affected by the external environment and operational loads. Moreover, due to factors such as functional dependence, load sharing, and environmental coupling, there may be underlying failure interaction relationships among components.
For components subject to discrete failures, the failure can be represented by a binary state variable $C_i(t) \in \{0, 1\}$, where $C_i(t) = 0$ indicates that component i is functioning normally and $C_i(t) = 1$ indicates that a hard failure has occurred. It is commonly assumed that the failure time follows a probability distribution such as the exponential or Weibull distribution.
For components exhibiting degradation failures, the degradation state is represented by a degradation index $Y_j(t)$, which can typically be modeled using a multi-state Markov process or a stochastic process such as the Wiener process or gamma process. When $Y_j(t) = 0$, component j is considered to be in a new condition; when $Y_j(t)$ reaches a predefined threshold, the component is regarded as having experienced a soft failure, meaning it can no longer meet functional or performance requirements. Taking the linear Wiener process as an example, the degradation process of component j is expressed as:
$$Y_j(t) = \lambda_j t + \sigma_j B(t), \tag{1}$$
where $\lambda_j$ is the drift parameter, $\sigma_j$ is the diffusion parameter, and $B(t)$ denotes standard Brownian motion. The probability density function of $Y_j(t)$ is
$$f_{Y_j}(y) = \begin{cases} \dfrac{1}{\sqrt{2\pi \sigma_j^2 t}} \exp\!\left(-\dfrac{(y - \lambda_j t)^2}{2\sigma_j^2 t}\right), & t > 0, \\ 0, & t \le 0. \end{cases} \tag{2}$$
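To make the degradation model concrete, the following minimal Python sketch simulates a sample path of the linear Wiener process in Equation (1) and evaluates the density of Equation (2); the drift and diffusion values are illustrative assumptions rather than parameters from the case study.

```python
import numpy as np

def simulate_wiener_degradation(lam, sigma, t_grid, seed=None):
    """Simulate one path of Y(t) = lam * t + sigma * B(t) on the given time grid (Equation (1))."""
    rng = np.random.default_rng(seed)
    dt = np.diff(t_grid, prepend=0.0)                 # step sizes, first step from t = 0
    increments = lam * dt + sigma * np.sqrt(dt) * rng.standard_normal(len(dt))
    return np.cumsum(increments)

def wiener_pdf(y, lam, sigma, t):
    """Density of Y(t) from Equation (2): Normal(lam * t, sigma^2 * t) for t > 0, zero otherwise."""
    if t <= 0:
        return 0.0
    var = sigma ** 2 * t
    return np.exp(-(y - lam * t) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

# Illustrative (assumed) parameters: drift 0.05, diffusion 0.1
path = simulate_wiener_degradation(lam=0.05, sigma=0.1, t_grid=np.linspace(0.0, 100.0, 101), seed=42)
density = wiener_pdf(y=2.5, lam=0.05, sigma=0.1, t=50.0)
```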

2.2. Representation of Component Failure Interactions

In multi-component systems, three types of failure interaction arise from the possible combinations of component failure types:
  • Interactions between discrete component failures;
  • Interactions between degradation-related failures;
  • Mixed failure interactions between discrete and degradation components.
A Bayesian network uses nodes to represent individual components and directed edges to denote static dependencies among components. As a result, it can model the first two types of failure interaction, as well as the influence of discrete component failures on degrading components. Furthermore, DBNs extend Bayesian networks to model temporal processes by discretizing the system's operating timeline into slices and assuming that state transitions between consecutive slices are homogeneous and Markovian [20]. Meanwhile, the PHM is capable of capturing the effects of degrading components on discrete-component failures [10]. It has also been shown that a PHM can be equivalently represented as a Bayesian network [21]. Therefore, integrating the PHM into the DBN enables a comprehensive characterization of all component failure types and their interactions, as well as the dynamic evolution of failures over time.
Without loss of generality, a PHM-DBN model is designed as shown in Figure 1 to illustrate the aforementioned properties. The system includes four components, represented by nodes C1 to C4, and a system-level node S. The failure type and state space of each node are defined in Table 2.
The DBN contains three types of directed edges, representing three different meanings, as detailed below.
1. Static failure interaction between components
Dotted edges between nodes within the same time slice represent static failure dependencies between components, which are quantified using conditional probability tables (CPTs). For example, the directed edge C1 → C3 is defined by the CPT in Table 3, where $a_{ij} \ge 0$ and $\sum_{j=0}^{K-1} a_{ij} = 1$ for $i = 0, 1$.
As the failure of any component leads to system malfunction, the CPT of {C1, C2, C3, C4} → S is defined as follows:
$$S = \begin{cases} 0, & \text{if no component has failed}, \\ 1, & \text{otherwise}. \end{cases} \tag{3}$$
2. Dynamic evolution of the single-component failure process
Dashed edges connecting the same component across consecutive time slices represent the temporal evolution of its failure process. The time interval Δt is assumed sufficiently small such that no state transition occurs within it. For the discrete component C1, the CPT of C1(t) → C1(t + Δt) is given in Table 4, where R1(·) is the reliability function of C1.
For the degrading component C2, its continuous stochastic degradation process is discretized into a multi-state Markov process [22], and the Markov transition probabilities can then be represented using a CPT. The steps are as follows. Assume that the state space of degradation component i is divided into $\{0, 1, \ldots, M_i\}$, with the failure threshold set as $D_i$. Let $d_i = D_i / M_i$ denote the interval width for discretizing the degradation process. For $m_i \in \{0, 1, \ldots, M_i - 1\}$, if $Y_i(t) \in [m_i d_i, (m_i + 1) d_i)$, the component is considered to be in state $m_i$, whereas if $Y_i(t) \in [D_i, +\infty)$, the component is considered to be in state $M_i$, i.e., the failure state, which is the absorbing state of the Markov process. After discretization, the CPT of C2(t) → C2(t + Δt) is given in Table 5, where
$$P^i_{(j,k)} = \begin{cases} \displaystyle\int_{k d_i}^{(k+1) d_i} f_{Y_i}\big(y - (j + 0.5)\, d_i\big)\, \mathrm{d}y, & j = 0, 1, \ldots, M_i - 1;\; k = 0, 1, \ldots, M_i - 1, \\[2mm] \displaystyle\int_{D_i}^{+\infty} f_{Y_i}\big(y - (j + 0.5)\, d_i\big)\, \mathrm{d}y, & j = 0, 1, \ldots, M_i - 1;\; k = M_i, \end{cases} \tag{4}$$
and $f_{Y_i}(\cdot)$ is the density defined in Equation (2), evaluated over the time step $\Delta t$.
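The discretization in Equation (4) can be implemented directly by integrating the increment density over each state interval. The sketch below assumes the Wiener model of Equation (1), so the increment over Δt is normal with mean λΔt and variance σ²Δt; probability mass below zero is folded into state 0 so each row sums to one, and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def degradation_cpt(lam, sigma, dt, D, M):
    """Build the (M+1) x (M+1) CPT of C(t) -> C(t + dt) following Equation (4).

    States 0..M-1 correspond to intervals [m*d, (m+1)*d); state M (level >= D) is absorbing.
    The current degradation level is approximated by the interval midpoint (j + 0.5) * d.
    """
    d = D / M
    inc = norm(loc=lam * dt, scale=sigma * np.sqrt(dt))     # increment distribution over dt
    P = np.zeros((M + 1, M + 1))
    for j in range(M):
        y0 = (j + 0.5) * d
        for k in range(M):
            upper = inc.cdf((k + 1) * d - y0)
            lower = 0.0 if k == 0 else inc.cdf(k * d - y0)  # fold mass below 0 into state 0
            P[j, k] = upper - lower
        P[j, M] = 1.0 - inc.cdf(D - y0)                     # absorbing failure state
    P[M, M] = 1.0
    return P

# Illustrative values: dt = 20, failure threshold D = 7 split into M = 7 intervals
cpt = degradation_cpt(lam=0.05, sigma=0.1, dt=20.0, D=7.0, M=7)
assert np.allclose(cpt.sum(axis=1), 1.0)
```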
3. Dynamic failure interactions between components
Solid edges in Figure 1 connect nodes C1, C2, and C4 across time slices, representing dynamic and mixed failure interactions. These are modeled using a PHM:
$$h_4\big(t, Z(t)\big) = h_{40}(t)\, \varphi\big(\gamma, Z(t)\big), \tag{5}$$
where $h_{40}(t)$ is the baseline hazard rate of C4, determined by the failure distribution of C4; $\varphi(\gamma, Z(t))$ is the link function capturing the effect of the covariates on the failure process of C4; $\gamma$ is the coefficient vector; and $Z(t)$ denotes the covariates constructed from the states of C1 and C2. The conditional reliability function of C4 is then:
$$R_4\big(t + \Delta t \mid t, Z(t)\big) = P\big(\zeta > t + \Delta t \mid \zeta > t, Z(t)\big) = E\!\left[\exp\!\left(-\int_t^{t+\Delta t} h_{40}(s)\, \varphi\big(\gamma, Z(s)\big)\, \mathrm{d}s\right) \,\middle|\, Z(t)\right], \tag{6}$$
where $\zeta$ denotes the failure time of C4.
As the number of covariates and states increases, obtaining an analytical solution for Equation (6) becomes challenging. Therefore, an approximation method is adopted [23] under the following assumptions:
(1) The component state deteriorates continuously over time; that is, the baseline hazard function is nondecreasing.
(2) The covariate process is nondecreasing, meaning Z(t + Δt) ≥ Z(t) for any Δt > 0. This is reasonable in most practical scenarios, as the covariates in this context correspond to the states of other components. This implies that both the link function and the hazard rate function are also nondecreasing.
The above assumptions align with engineering practice, leading to:
$$\int_t^{t+\Delta t} h_{40}(s)\, \varphi\big(\gamma, Z(s)\big)\, \mathrm{d}s \;\ge\; \int_t^{t+\Delta t} h_{40}(s)\, \varphi\big(\gamma, Z(t)\big)\, \mathrm{d}s = \varphi\big(\gamma, Z(t)\big) \int_t^{t+\Delta t} h_{40}(s)\, \mathrm{d}s, \tag{7}$$
$$\int_t^{t+\Delta t} h_{40}(s)\, \varphi\big(\gamma, Z(s)\big)\, \mathrm{d}s \;\le\; \int_t^{t+\Delta t} h_{40}(s)\, \varphi\big(\gamma, Z(t+\Delta t)\big)\, \mathrm{d}s = \varphi\big(\gamma, Z(t+\Delta t)\big) \int_t^{t+\Delta t} h_{40}(s)\, \mathrm{d}s, \tag{8}$$
$$\lim_{\Delta t \to 0} \left[\varphi\big(\gamma, Z(t+\Delta t)\big) - \varphi\big(\gamma, Z(t)\big)\right] = 0. \tag{9}$$
Given the assumption that no state transitions occur within a sufficiently small Δt, Equations (7) and (8) become arbitrarily close and bound the exponent in Equation (6) from below and above. Thus, Equation (6) can be approximated as:
$$R_4\big(t + \Delta t \mid t, Z(t)\big) = E\!\left[\exp\!\left(-\int_t^{t+\Delta t} h_{40}(s)\, \varphi\big(\gamma, Z(s)\big)\, \mathrm{d}s\right) \,\middle|\, Z(t)\right] \approx \exp\!\left(-\varphi\big(\gamma, Z(t)\big) \int_t^{t+\Delta t} h_{40}(s)\, \mathrm{d}s\right). \tag{10}$$
Equation (10) allows the PHM to be converted into a CPT. The CPT for the edge {C1(t), C2(t), C4(t)} → C4(t + Δt) is provided in Table 6.
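As an illustration of how Equation (10) populates Table 6, the sketch below uses a Weibull baseline hazard and an exponential link function, mirroring the form later used in the case study (Table 14); all numerical values are illustrative assumptions.

```python
import numpy as np

def weibull_cum_hazard(t, eta, beta):
    """Cumulative baseline hazard H0(t) = (t / eta)**beta of a Weibull baseline."""
    return (t / eta) ** beta

def conditional_reliability(t, dt, z, gamma, eta, beta):
    """Approximate R4(t + dt | t, Z(t)) via Equation (10) with an exponential link phi = exp(gamma . z)."""
    link = np.exp(np.dot(gamma, z))
    delta_h0 = weibull_cum_hazard(t + dt, eta, beta) - weibull_cum_hazard(t, eta, beta)
    return np.exp(-link * delta_h0)

# Illustrative covariate state z = (C1 state, C2 state) and parameters echoing Table 14
r4 = conditional_reliability(t=100.0, dt=20.0, z=np.array([0.0, 3.0]),
                             gamma=np.array([0.1, 0.4]), eta=200.0, beta=1.5)
table6_row = [r4, 1.0 - r4]   # CPT row for C4(t) = 0 given Z(t)
```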

2.3. Parameter Learning of the PHM-DBN Model

Since all of the above failure models and failure interactions have been transformed into the DBN representation, parameter learning of the PHM-DBN model essentially reduces to parameter learning of the DBN model. This involves two aspects that can be performed independently: (1) parameter learning for the failure models of the component nodes; and (2) parameter learning for the directed edges representing failure interactions.
The parameter learning method for each component node’s failure model depends on the component’s failure type and pre-defined failure model. Table 7 summarizes the required data sources and applicable parameter learning methods. Among these methods, techniques such as Maximum Likelihood Estimation (MLE) and Bayesian estimation are standardized processes in the reliability engineering field, and detailed implementations can be found in references [24,25].
As described in Section 2.2, the three types of directed edges in the model are all converted into CPT. The first type of CPT represents the conditional probabilities of failure states between components. The second type denotes the state transition probabilities of the same node over time. The third type directly transforms the covariate-hazard rate relationship estimated by the PHM into a DBN-compatible CPT, which is the centerpiece of the PHM-DBN coupling and achieves mathematical compatibility between the two models. Thus, the CPT can be derived directly from the PHM parameter learning results without additional parameter estimation.
Table 8 lists the required data sources and available parameter learning methods for these CPTs, which follow standardized procedures in both DBN and PHM research. Since this study focuses on optimal maintenance decision-making, the detailed steps for CPT and PHM parameter learning are not elaborated further and can be referred to in [26,27,28,29,30,31].

3. MDP-Based Maintenance Decision Formulation

The DBN model in Section 2 characterizes the state space and probability transition of multi-component systems. By further incorporating elements such as maintenance actions and associated costs, a five-tuple Markov Decision Process model can be constructed, thereby enabling the optimization of maintenance policies [17].

3.1. State and Action Spaces

The overall state space of the system is denoted as S = {C1, C2, …, CN}, where Ci (i = 1, 2, …, N) represents the state of component i, corresponding to a node state in the DBN. If component i has a discrete failure mode, its state space is Ci ∈ {0, 1}; if component i is a degradation component, its discretized state space is Ci ∈ {0, 1, …, Mi}.
Action nodes can be introduced into the DBN to define the state-action transition probabilities [23]. As shown in Figure 2, each component node is associated with an action node Ai, connected via a directed edge. The collective action space of the system is A = {A1, A2, …, AN}. A single action Ai is performed on component i, with Ai ∈ {0, CMi, PMi}, defined as follows:
Ai = 0: Do nothing. The state of component i remains unchanged.
Ai = CMi: Corrective maintenance. Applied when a discrete component has failed or a degrading component has reached state Mi. This action resets the age of a discrete component or the degradation state of a degrading component to 0.
Ai = PMi: Preventive maintenance. This action is only available for degrading components. It reduces the degradation level proportionally; i.e., the post-maintenance level becomes $\alpha Y_i(t)$, where α ∈ (0, 1) is the reduction factor (a small illustrative sketch of these action effects is given below).
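The following minimal sketch illustrates, under an assumed state encoding, how the three action types map a component's current discretized state to the post-maintenance state used as the starting point of the DBN transition in Section 3.2; the default reduction factor and the midpoint mapping are assumptions for illustration.

```python
def apply_action(state, action, alpha=0.6):
    """Return the post-maintenance discretized state of a component.

    state  : current state index (0 = as-good-as-new)
    action : "DN" (do nothing), "CM" (corrective maintenance), or "PM" (preventive maintenance)
    alpha  : PM reduction factor in (0, 1); illustrative default
    """
    if action == "DN":
        return state
    if action == "CM":
        return 0                       # reset the age / degradation level to zero
    if action == "PM":
        # Midpoint approximation: Y ~ (state + 0.5) * d; the reduced level alpha * Y
        # is mapped back to its interval index, matching the discretization of Equation (4).
        return int(alpha * (state + 0.5))
    raise ValueError(f"unknown action: {action}")

# Example: PM on a degrading component in state 5 with alpha = 0.6 yields post-PM state 3
post_pm = apply_action(5, "PM", alpha=0.6)
```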

3.2. State-Action Transition Probabilities

When the states of all action nodes within the same time slice are specified, the state-action transition probabilities of component nodes can be deterministically defined using CPT. It is assumed that maintenance actions take effect immediately; that is, the node state at  t + Δ t  is determined based on the post-maintenance state at time t. Taking nodes C1 and C2 as examples, the state-action transition probabilities are defined in Table 9, Table 10 and Table 11.
where $\alpha_m$ ($m = 1, 2, \ldots, M_2$) quantifies the effect of imperfect preventive maintenance on the component state; it is determined by the interval into which the reduced degradation level falls, according to Equation (4).
Similarly, the state–action transition probabilities of nodes C3 and C4 can be derived via DBN inference, given the states of their parent nodes (including action nodes and other component nodes).

3.3. Reward Setting

The cost associated with maintenance actions at each decision epoch consists of the following:
Corrective Maintenance (CM) Cost: includes the unplanned downtime cost $c_d$ and the maintenance cost $c_i^c$ of each component undergoing CM. The total cost is $c_d + \sum_{i \in CM} c_i^c$, where CM is the set of components receiving CM.
Preventive Maintenance (PM) Cost: includes a fixed setup cost $c_s$ and the maintenance cost $c_j^p$ of each component undergoing PM. The total cost is $c_s + \sum_{j \in PM} c_j^p$, where PM is the set of components receiving PM.
The immediate cost $R(S, A)$ at each decision epoch is then defined as:
$$R(S, A) = \begin{cases} 0, & \text{if } CM = \varnothing \text{ and } PM = \varnothing, \\ c_d + c_s + \displaystyle\sum_{i \in CM} c_i^c + \sum_{j \in PM} c_j^p, & \text{otherwise}. \end{cases} \tag{11}$$
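A minimal sketch of the immediate cost in Equation (11) follows, using the cost values later defined in Table 15; returning the negated cost (so that maximizing the total expected reward minimizes cost) is an assumed sign convention.

```python
# Cost parameters as later defined in Table 15
CM_COST = {1: 500, 2: 750, 3: 500, 4: 800}    # corrective maintenance cost per component
PM_COST = {2: 100, 3: 200}                    # preventive maintenance cost per component
C_DOWNTIME, C_SETUP = 5000, 350               # unplanned downtime and fixed setup costs

def immediate_reward(cm_set, pm_set):
    """Equation (11): zero if no maintenance is performed, otherwise the negated total cost."""
    if not cm_set and not pm_set:
        return 0.0
    cost = C_DOWNTIME + C_SETUP
    cost += sum(CM_COST[i] for i in cm_set)
    cost += sum(PM_COST[j] for j in pm_set)
    return -cost

# Example: CM on component 4 together with PM on component 2 -> -(5000 + 350 + 800 + 100) = -6250
r = immediate_reward(cm_set={4}, pm_set={2})
```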

3.4. MDP Maintenance Optimization

The transition probabilities of the proposed MDP model are non-stationary. Therefore, the maintenance policy is optimized within a finite-horizon, discrete-time MDP framework. Given a planning horizon T and time step Δt, the number of decision epochs is $H = T/\Delta t + 1$. The selection of the planning horizon is guided by two core engineering principles. The first aligns with component reliability characteristics: the horizon must cover at least one complete "degradation-failure-maintenance" cycle of the critical components so that optimal decisions account for long-term reliability. The second reflects industrial maintenance practice: planning horizons are typically set over fixed short-to-medium-term cycles to balance decision flexibility against resource planning, enabling adaptation to real-time state changes while avoiding myopic decision-making.
The objective is to maximize the total expected reward over the horizon. This is achieved using backward induction in dynamic programming. The process starts from the final epoch H and proceeds backward to the first epoch. The Bellman equation for the state-value function at each decision epoch h = {1, 2, …, H} is constructed as follows:
$$V_h(S) = \max_{A \in \mathcal{A}} \left[ R(S, A) + \gamma \sum_{S' \in \mathcal{S}} P_h(S' \mid S, A)\, V_{h+1}(S') \right], \tag{12}$$
where the terminal value is set to zero, i.e., $V_{H+1}(S') = 0$ for all states. The discount factor $\gamma$ (0 ≤ $\gamma$ ≤ 1) is typically used to balance the weight of immediate and future rewards; $\gamma < 1$ avoids an unbounded total reward and ensures the convergence of value iteration. However, for a finite-horizon MDP the decision process is bounded by a clear time endpoint, and future rewards within the horizon are equally relevant to the overall objective, so $\gamma$ is set to 1 in this study. The optimal policy at each state S and epoch h is
$$\pi_h(S) = \arg\max_{A \in \mathcal{A}} \left[ R(S, A) + \gamma \sum_{S' \in \mathcal{S}} P_h(S' \mid S, A)\, V_{h+1}(S') \right]. \tag{13}$$
At each decision epoch h, $P_h(S' \mid S, A)$ is the probability of transitioning to state $S'$ when action $A$ is taken in state $S$.
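A compact backward-induction sketch for Equations (12) and (13) is given below, assuming the non-stationary transition tensor and reward array have already been assembled from the DBN (e.g., via Equation (15)); the array shapes and names are illustrative.

```python
import numpy as np

def backward_induction(P, R, gamma=1.0):
    """Finite-horizon backward induction (Equations (12) and (13)).

    P : array of shape (H, nA, nS, nS); P[h, a, s, s'] is the transition probability at epoch h
    R : array of shape (nA, nS); immediate reward of taking action a in state s
    Returns the value function V of shape (H + 1, nS) and the optimal policy of shape (H, nS).
    """
    H, nA, nS, _ = P.shape
    V = np.zeros((H + 1, nS))                 # terminal value V_{H+1} = 0
    policy = np.zeros((H, nS), dtype=int)
    for h in range(H - 1, -1, -1):            # proceed backward from the final epoch
        Q = R + gamma * np.einsum("ast,t->as", P[h], V[h + 1])
        policy[h] = np.argmax(Q, axis=0)      # optimal action per state (Equation (13))
        V[h] = np.max(Q, axis=0)              # optimal value per state (Equation (12))
    return V, policy
```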
In traditional MDP formulations for multi-component systems, state-space complexity constitutes the primary computational bottleneck: the total number of system states equals the product of the state counts of the individual components, which grows exponentially with N and inflates the cost of computing state transition probabilities. The DBN model, however, decomposes the joint state space using conditional independence assumptions, thereby reducing the computational complexity [18]. In the DBN model, each component's state depends only on its parent nodes rather than on all other components. Mathematically, the joint probability of the system state $S(t) = \{C_1(t), C_2(t), \ldots, C_N(t)\}$ can be decomposed as:
$$P(S) = \prod_{i=1}^{N} P\big(C_i \mid \mathrm{Parents}(C_i)\big), \tag{14}$$
where $\mathrm{Parents}(C_i)$ denotes the parent nodes of component i in the DBN. This decomposition reduces the number of parameters needed to describe the joint state.
The transition probability can also be decomposed using the DBN’s time-slice dependencies:
$$P(S' \mid S, A) = \prod_{i=1}^{N} P\big(C_i(t + \Delta t) \mid \mathrm{Parents}(C_i(t + \Delta t)), A\big), \tag{15}$$
Each component’s transition is computed independently using its CPT, eliminating the need to enumerate all possible joint state transitions. Leveraging this characteristic, the DBN structure significantly reduces the computational burden associated with state transition probability calculations. A formal complexity comparison between the proposed framework and traditional MDP is provided in Table 12.
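The factored transition of Equation (15) can be sketched as a product of per-component CPT lookups, so that only the parents of each node need to be enumerated; the data structures and names below are illustrative assumptions.

```python
def joint_transition_prob(next_state, state, action, cpts, parents):
    """Equation (15): product of per-component transition probabilities from the DBN.

    next_state, state : dicts {component: state index} at t + dt and at t
    action            : dict {component: action index} applied at time t
    cpts[i]           : callable (parent_states, action_i, next_state_i) -> probability
    parents[i]        : list of parent components of node i within the DBN
    """
    prob = 1.0
    for i, cpt in cpts.items():
        parent_states = tuple(state[p] for p in parents[i])
        prob *= cpt(parent_states, action.get(i, 0), next_state[i])
    return prob
```

Because each factor conditions only on a node's parents, the cost of evaluating a single transition scales with the parent-set sizes rather than with the full joint state space, which is the source of the savings summarized in Table 12.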

4. Numerical Study

Here, we present a numerical case study in which the proposed model is applied to the CBM of multi-component systems. The target system for demonstration remains the DBN illustrated in Figure 1. Thanks to its flexibility, this system can represent various types of industrial assets, such as manufacturing systems, rotating machinery, power systems, and transportation equipment, and can be extended with additional nodes and attributes tailored to specific practical scenarios.

4.1. Model Definition of the Target System

The proposed model for the target system utilizes data encompassing the failure model, the state space of each component node, interdependencies between node failures, the states of action nodes, and relevant costs. The costs include expenses related to unplanned downtime, planned downtime, and performing CM or PM actions, as shown in Table 13, Table 14 and Table 15.
The methodology proceeds in two main steps. First, the DBN model of the system is established, and the states of its nodes are inferred using the BNT toolbox in MATLAB R2020a [32]. Second, the optimal maintenance policies are obtained by solving the MDP model through the backward induction algorithm.

4.2. Results Discussion and Comparison

The maintenance planning horizon is set to T = 500 with a discrete time step of Δt = 20. For different initial system states, the optimal maintenance actions are derived directly from the MDP solution and visualized as a heatmap in Figure 3. The key insights from the results are summarized as follows:
  • As the initial degradation state of component C2 increases, the optimal strategy shifts progressively from PM2 to CM2. This indicates that postponing corrective maintenance too long becomes economically unfavorable for an aged component.
  • When C2 begins in a degraded state (C2 ≥ 2), the maintenance strategy over time includes CM4 for component C4, even before C4 has failed. This is due to the failure dependence between C2 and C4, which elevates the failure risk of C4. Thus, performing joint maintenance yields greater long-term benefits than maintaining a single component.
  • When multiple components require maintenance simultaneously, joint corrective or preventive actions are generally more cost-effective than maintaining components individually. This advantage stems from the sharing of downtime and setup costs. Despite the additional maintenance expenses incurred, the overall long-term value is improved.
  • As the system ages, the maintenance strategy increasingly prioritizes short-term gains. Consequently, in the later stages of the planning horizon, actions with minimal immediate cost tend to be selected—including, in some cases, the option of performing no maintenance.
A comparative study between the independence assumption and the proposed model was conducted. Under the independence assumption, failure interactions between components are ignored, and the resulting optimal maintenance strategy derived from the MDP solution is shown in Figure 4. In contrast to the strategy in Figure 3, the independence-based approach focuses more on the long-term benefit of individual components, exhibits greater conservatism in joint maintenance, and only triggers maintenance when the degradation state becomes more severe. For example, maintenance of C2 and C3 is initiated only when C2 > 5 and C3 = 1. Moreover, because the failure impact is neglected, the strategy does not include an additional CM action for C4.
Additionally, a comparative analysis of system operating costs under the independence assumption and the proposed dependency-aware assumption was conducted. The system operation was simulated 500 times for each of 96 different initial states, and the average operating costs under the two assumptions were calculated, as shown in Figure 5, where the blue and red boxplots represent the average operating costs of the proposed model and the independence assumption, respectively. The results indicate that in 59 out of the 96 initial states, the proposed method reduces the average operating cost by 10%.
To comprehensively verify the effectiveness and practical superiority of the proposed framework, we conducted comparative experiments against two maintenance policies, which cover the most widely adopted maintenance paradigms in engineering practice, ensuring the comparison reflects real-world decision scenarios. All experiments were performed via 500 independent Monte Carlo simulations, with results reported as total expected cost and system availability. The baseline policies selected for comparison are defined as follows:
(1) Corrective Maintenance (CM-only): maintenance actions are initiated only after a component fails, with no preventive measures.
(2) Threshold-Based Proactive Maintenance via DBN [17] (ThPM-DBN): a predictive maintenance strategy based on system reliability; whenever the system reliability falls below a given threshold, a proactive maintenance action is scheduled.
The comparative performance of the proposed framework and the baseline policies is summarized in Table 16. Compared with the best-performing baseline policy, our framework still delivers at least an 8.1% reduction in total expected cost, together with a modest 1.8% improvement in system availability. This demonstrates that the proposed PHM-DBN-MDP framework outperforms traditional maintenance policies in both cost reduction and availability improvement. The core advantage lies in its ability to integrate mixed failure-type modeling, dependency quantification, and state-aware optimization. These results confirm the framework's practical effectiveness and competitiveness for industrial condition-based maintenance decision-making.

4.3. Sensitivity Analysis

Component-related costs, including the PM cost, CM cost, and system downtime cost, are critical inputs to the MDP reward function. In industrial practice, these costs are often estimated with uncertainty. To evaluate the framework's robustness to such variations, cost fluctuation levels of ±20% and ±40% relative to the baseline cost parameters in Table 15 were tested. For each level, all component costs were adjusted simultaneously to simulate systemic cost variations, and the framework's performance was compared with the baseline. Table 17 presents the sensitivity results to component cost fluctuations.
It can be seen that the total expected cost changes proportionally with the cost fluctuations, confirming that the MDP reward function incorporates cost inputs without bias. Meanwhile, the system availability remains nearly constant across all cost fluctuation levels, indicating that the framework's ability to ensure system reliability is not compromised by cost estimation uncertainty. The results show that the framework's performance responds linearly to component cost fluctuations while maintaining stable decision logic and high robustness.

5. Conclusions

This study proposes a maintenance decision framework for multi-component systems, addressing the core challenge of modeling mixed failure types (discrete/degradation) and complex failure dependencies (static/dynamic/mixed) that limits the applicability of existing methods. By integrating the proportional hazards model into a dynamic Bayesian network and coupling the hybrid model with a finite-horizon Markov decision process, the framework achieves characterization of failure evolution, dependency quantification, and maintenance optimization. A systematic numerical study was conducted to validate the framework's effectiveness, with comprehensive comparisons against baseline policies. Compared with the best result among the CM-only and DBN-based proactive maintenance policies, the proposed framework reduces the total expected cost by 8.1% while improving system availability by 1.8%, confirming its superiority in balancing cost efficiency and system reliability.
The primary innovation of this work lies in the dynamic, mathematically compatible coupling mechanism between DBN and PHM. This coupling enables three key breakthroughs: (1) comprehensive coverage of all three typical failure dependencies in multi-component systems, resolving the limitation of traditional DBNs in modeling mixed-type interactions and PHMs in handling multi-component scalability; (2) direct transformation of PHM’s covariate-hazard rate relationship into DBN’s CPT via approximation, achieving seamless integration with DBN’s probabilistic inference; and (3) a unified MDP-DBN framework that embeds maintenance actions and cost structures, enabling end-to-end condition-based decision-making.
Despite its effectiveness, the framework has three notable limitations that guide future refinement. (1) Fixed operating environment assumption: The study assumes a static operating environment, failing to account for the dynamic nature of real-world conditions (e.g., variable loads, environmental fluctuations) that can dynamically alter component degradation states and failure risks. (2) Lack of inspection integration: The current model only considers CM and PM actions, without incorporating inspection processes. This limits the ability to obtain accurate real-time component state information, hindering refined maintenance decision-making. (3) Scalability constraints for large-scale systems: As the number of components and maintenance action types increases, the state space of the MDP expands significantly. Although the conditional independence of DBN mitigates partial computational burden, the framework still faces challenges in efficiently handling large-scale systems with extensive components and complex interactions.
To enhance practical applicability and address the above limitations, future work will focus on three verifiable objectives aligned with industrial needs: (1) Extend the model to incorporate time-varying operating environments and analyze their impacts on component degradation and failure dependencies, developing adaptive maintenance strategies that respond to real-time environmental variations. (2) Introduce inspection nodes and related costs into the DBN-PHM-MDP framework, optimizing inspection frequency and timing based on component degradation states. This will improve the accuracy of state estimation and enable more targeted, cost-effective maintenance decisions. (3) Explore advanced algorithms such as factored MDP [33], approximate dynamic programming, or deep reinforcement learning to address the state space explosion problem. This will enhance the framework’s scalability and computational efficiency for large-scale multi-component systems.

Author Contributions

Conceptualization, S.L., J.G. and J.T.; methodology, S.L.; software, J.T.; validation, C.Y., P.X. and J.G.; formal analysis, S.L.; investigation, P.X.; resources, C.Y.; data curation, C.Y.; writing—original draft preparation, S.L.; writing—review and editing, P.X.; visualization, C.Y.; supervision, P.X. and G.W.; project administration, G.W.; funding acquisition, S.L., J.G. and G.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Guangdong Basic and Applied Basic Research Foundation, [Grant number 2023A1515012918]; the Opening Project of the Key Laboratory of CNC Equipment Reliability, Ministry of Education, Jilin University, [Grant number JLU-cncr-202302]; Natural Science Foundation of Chongqing Municipality, [Grant number CSTB2022NSCQ-MSX0902]; the Special Funds of CEPREI, [Grant number 24Z06].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Some or all data, models, or codes generated or used during the study are available from the corresponding authors by request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CBM  Condition-based maintenance
CM   Corrective maintenance
CPT  Conditional probability table
DBN  Dynamic Bayesian network
MDP  Markov decision process
MLE  Maximum likelihood estimation
PHM  Proportional hazards model
PM   Preventive maintenance

References

  1. Olde Keizer, M.C.A.; Flapper, S.D.P.; Teunter, R.H. Condition-Based Maintenance Policies for Systems with Multiple Dependent Components: A Review. Eur. J. Oper. Res. 2017, 261, 405–420. [Google Scholar] [CrossRef]
  2. Zhao, J.; Gao, C.; Tang, T. A Review of Sustainable Maintenance Strategies for Single Component and Multicomponent Equipment. Sustainability 2022, 14, 2992. [Google Scholar] [CrossRef]
  3. Li, R.; Cai, B.; Zhao, Y.; Liu, Y.; Zhang, Y.; Kong, X.; Liu, Y. Condition-Based Maintenance Method for Multi-Component Systems under Discrete-State Condition: Subsea Production System as a Case. Ocean Eng. 2024, 306, 118166. [Google Scholar] [CrossRef]
  4. Duan, C.; Deng, T.; Song, L.; Wang, M.; Sheng, B. An Adaptive Reliability-Based Maintenance Policy for Mechanical Systems under Variable Environments. Reliab. Eng. Syst. Safe 2023, 238, 109396. [Google Scholar] [CrossRef]
  5. Meango, T.J.-M.; Ouali, M.-S. Failure Interaction Models for Multicomponent Systems: A Comparative Study. SN Appl. Sci. 2019, 1, 66. [Google Scholar] [CrossRef]
  6. Xu, J.; Liang, Z.; Li, Y.-F.; Wang, K. Generalized Condition-Based Maintenance Optimization for Multi-Component Systems Considering Stochastic Dependency and Imperfect Maintenance. Reliab. Eng. Syst. Safe 2021, 211, 107592. [Google Scholar] [CrossRef]
  7. Li, H.; Zhu, W.; Dieulle, L.; Deloux, E. Condition-Based Maintenance Strategies for Stochastically Dependent Systems Using Nested Lévy Copulas. Reliab. Eng. Syst. Safe 2022, 217, 108038. [Google Scholar] [CrossRef]
  8. Xu, M.; Jin, X.; Kamarthi, S.; Noor-E-Alam, M. A Failure-Dependency Modeling and State Discretization Approach for Condition-Based Maintenance Optimization of Multi-Component Systems. J. Manuf. Syst. 2018, 47, 141–152. [Google Scholar] [CrossRef]
  9. Zhou, H.; Li, Y. Optimal Replacement in a Proportional Hazards Model with Cumulative and Dependent Risks. Comput. Ind. Eng. 2023, 176, 108930. [Google Scholar] [CrossRef]
  10. Hu, J.; Chen, P. Predictive Maintenance of Systems Subject to Hard Failure Based on Proportional Hazards Model. Reliab. Eng. Syst. Safe 2020, 196, 106707. [Google Scholar] [CrossRef]
  11. Wang, Q.-A.; Chen, J.; Ni, Y.; Xiao, Y.; Liu, N.; Liu, S.; Feng, W. Application of Bayesian Networks in Reliability Assessment: A Systematic Literature Review. Structures 2025, 71, 108098. [Google Scholar] [CrossRef]
  12. Cai, B.; Kong, X.; Liu, Y.; Lin, J.; Yuan, X.; Xu, H.; Ji, R. Application of Bayesian Networks in Reliability Evaluation. IEEE Trans. Ind. Inf. 2019, 15, 2146–2157. [Google Scholar] [CrossRef]
  13. Luo, X.; Li, Y.; Bai, X.; Tang, R.; Jin, H. A Novel Approach Based on Fault Tree Analysis and Bayesian Network for Multi-State Reliability Analysis of Complex Equipment Systems. Proc. Inst. Mech. Eng. Part O J. Risk Reliab. 2024, 238, 812–838. [Google Scholar] [CrossRef]
  14. Ait Mokhtar, E.H.; Laggoune, R.; Chateauneuf, A. Benefit and Customer Demand Approach for Maintenance Optimization of Complex Systems Using Bayesian Networks. Proc. Inst. Mech. Eng. Part O J. Risk Reliab. 2017, 231, 558–572. [Google Scholar] [CrossRef]
  15. Hu, J.; Zhang, L.; Liang, W. Opportunistic Predictive Maintenance for Complex Multi-Component Systems Based on DBN-HAZOP Model. Process Saf. Environ. 2012, 90, 376–388. [Google Scholar] [CrossRef]
  16. Cai, B.; Liu, Y.; Fan, Q.; Zhang, Y.; Yu, S.; Liu, Z.; Dong, X. Performance Evaluation of Subsea BOP Control Systems Using Dynamic Bayesian Networks with Imperfect Repair and Preventive Maintenance. Eng. Appl. Artif. Intel. 2013, 26, 2661–2672. [Google Scholar] [CrossRef]
  17. Özgür-Ünlüakın, D.; Türkali, B. Evaluation of Proactive Maintenance Policies on a Stochastically Dependent Hidden Multi-Component System Using DBNs. Reliab. Eng. Syst. Safe 2021, 211, 107559. [Google Scholar] [CrossRef]
  18. Faddoul, R.; Raphael, W.; Soubra, A.-H.; Chateauneuf, A. Incorporating Bayesian Networks in Markov Decision Processes. J. Infrastruct. Syst. 2013, 19, 415–424. [Google Scholar] [CrossRef]
  19. Morato, P.G.; Papakonstantinou, K.G.; Andriotis, C.P.; Nielsen, J.S.; Rigo, P. Optimal Inspection and Maintenance Planning for Deteriorating Structural Components through Dynamic Bayesian Networks and Markov Decision Processes. Struct. Saf. 2022, 94, 102140. [Google Scholar] [CrossRef]
  20. Schenkelberg, K.; Seidenberg, U.; Ansari, F. Analyzing the Impact of Maintenance on Profitability Using Dynamic Bayesian Networks. Procedia CIRP 2020, 88, 42–47. [Google Scholar] [CrossRef]
  21. Kraisangka, J.; Druzdzel, M.J. A Bayesian Network Interpretation of the Cox’s Proportional Hazard Model. Int. J. Approx. Reason. 2018, 103, 195–211. [Google Scholar] [CrossRef] [PubMed]
  22. Zheng, R.; Chen, B.; Gu, L. Condition-Based Maintenance with Dynamic Thresholds for a System Using the Proportional Hazards Model. Reliab. Eng. Syst. Safe 2020, 204, 107123. [Google Scholar] [CrossRef]
  23. Zheng, R.; Najafi, S.; Zhang, Y. A Recursive Method for the Health Assessment of Systems Using the Proportional Hazards Model. Reliab. Eng. Syst. Safe 2022, 221, 108379. [Google Scholar] [CrossRef]
  24. Gebraeel, N.Z.; Lawley, M.A.; Li, R.; Ryan, J.K. Residual-Life Distributions from Component Degradation Signals: A Bayesian Approach. IIE Trans. 2005, 37, 543–557. [Google Scholar] [CrossRef]
  25. Kvam, P.; Lu, J.-C. Statistical Reliability with Applications. In Springer Handbook of Engineering Statistics; Pham, H., Ed.; Springer: London, UK, 2006; pp. 49–61. [Google Scholar]
  26. Zhao, S.; Makis, V.; Chen, S.; Li, Y. Health Assessment Method for Electronic Components Subject to Condition Monitoring and Hard Failure. IEEE Trans. Instrum. Meas. 2019, 68, 138–150. [Google Scholar] [CrossRef]
  27. Man, J.; Zhou, Q. Prediction of Hard Failures with Stochastic Degradation Signals Using Wiener Process and Proportional Hazards Model. Comput. Ind. Eng. 2018, 125, 480–489. [Google Scholar] [CrossRef]
  28. Rebello, S.; Yu, H.; Ma, L. An Integrated Approach for System Functional Reliability Assessment Using Dynamic Bayesian Network and Hidden Markov Model. Reliab. Eng. Syst. Safe 2018, 180, 124–135. [Google Scholar] [CrossRef]
  29. Niculescu, R.S.; Mitchell, T.M.; Rao, R.B. Bayesian Network Learning with Parameter Constraints. J. Mach. Learn. Res. 2006, 7, 1357–1383. [Google Scholar]
  30. Özgür-Ünlüakın, D.; Türkali, B.; Karacaörenli, A.; Çağlar Aksezer, S. A DBN Based Reactive Maintenance Model for a Complex System in Thermal Power Plants. Reliab. Eng. Syst. Safe 2019, 190, 106505. [Google Scholar] [CrossRef]
  31. Abeygunawardane, S.K.; Jirutitijaroen, P.; Xu, H. Adaptive Maintenance Policies for Aging Devices Using a Markov Decision Process. IEEE Trans. Power Syst. 2013, 28, 3194–3203. [Google Scholar] [CrossRef]
  32. Bayes Net Toolbox for Matlab. Available online: https://github.com/bayesnet/bnt (accessed on 29 October 2025).
  33. Guestrin, C.; Koller, D.; Parr, R.; Venkataraman, S. Efficient Solution Algorithms for Factored MDPs. J. Artif. Intell. Res. 2003, 19, 399–468. [Google Scholar] [CrossRef]
Figure 1. Schematic of the PHM-DBN model.
Figure 2. DBN model with action nodes.
Figure 3. Optimal actions derived from the MDP solution.
Figure 4. Optimal maintenance actions under the independence assumption.
Figure 5. Average simulated cost: proposed model vs. independence assumption.
Table 1. Limitations of Existing Methods and Comparison with This Study.

Method | Ability to Model Mixed Discrete and Degradation Failures | Multi-Component Scalability | Ability to Model Multiple Failure Dependency Types | Dynamic Modeling Nature
Copula (Xu 2021 [6], Li 2022 [7]) | No (only degradation failures) | Poor (single component type) | No | No
PHM (Xu 2018 [8], Zhou 2023 [9], Hu 2020 [10]) | Yes | Poor (generally one or at most two components) | Medium (low flexibility) | Yes
BN/DBN (Hu 2012 [15], Cai 2013 [16], Özgür-Ünlüakın 2021 [17]) | Yes | Medium (no mixed component type assumed) | Medium (no mixed dependency) | Yes
BN/DBN with MDP (Faddoul 2013 [18], Morato 2022 [19]) | Yes | Medium (no mixed component type assumed) | Medium (no mixed dependency) | Yes
Our study (DBN-PHM with MDP) | Yes | Good | Good (all dependency types covered) | Yes
Table 2. Assumptions of each node.

Node | Node Type | State Space
C1 | Discrete | 2 states
C2 | Degradation | Continuous stochastic process
C3 | Degradation | K discrete states
C4 | Discrete | 2 states
Table 3. CPT of C1 → C3.

State of C1 | State of C3 = 0 | State of C3 = 1 | … | State of C3 = K − 1
0 | a00 | a01 | … | a0,K−1
1 | a10 | a11 | … | a1,K−1
Table 4. CPT of C1(t) → C1(t + Δt).

State of C1(t) | State of C1(t + Δt) = 0 | State of C1(t + Δt) = 1
0 | R1(t + Δt)/R1(t) | 1 − R1(t + Δt)/R1(t)
1 | 0 | 1
Table 5. CPT of C2(t) → C2(t + Δt).

State of C2(t) | 0 | 1 | 2 | … | M2
0 | P(0,0)^2 | P(0,1)^2 | P(0,2)^2 | … | P(0,M2)^2
1 | P(1,0)^2 | P(1,1)^2 | P(1,2)^2 | … | P(1,M2)^2
2 | P(2,0)^2 | P(2,1)^2 | P(2,2)^2 | … | P(2,M2)^2
… | … | … | … | … | …
M2 | 0 | 0 | 0 | … | 1
Table 6. CPT of {C1(t), C2(t), C4(t)} → C4(t + Δt).

State of {C1(t), C2(t), C4(t)} | State of C4(t + Δt) = 0 | State of C4(t + Δt) = 1
C4(t) = 0, Z(t) = {C1(t), C2(t)} | R4(t + Δt | t, Z(t)) | 1 − R4(t + Δt | t, Z(t))
C4(t) = 1, Z(t) = {C1(t), C2(t)} | 0 | 1
Table 7. Parameter learning of component nodes.

Component Node | Failure Type | Failure Model | Data Source | Parameter Learning Method
C1, C4 | Discrete | Failure time distribution function | Component failure time data | Maximum likelihood estimation (MLE), Bayesian estimation
C2 | Degradation | Continuous stochastic process | Component condition monitoring data | MLE, Bayesian estimation
C3 | Degradation | Multi-state Markov process | Component condition monitoring data | Frequency count method, MLE, expectation maximization, Bayesian estimation
Table 8. Parameter learning of directed edges.

Directed Edge Type | CPT Representation | Data Source | Parameter Learning Method
Type 1 | Conditional probability | State or failure data of parent and child nodes (requires strictly synchronous collection) | MLE, Bayesian estimation, expert elicitation
Type 2 | State transition probability | Component time-series state data | MLE, Bayesian estimation, or derived from the component failure model
Type 3 | PHM | Component failure data together with covariates' time-series state data (requires strictly synchronous collection) | MLE, Bayesian estimation
Table 9. State-action transition probabilities of C1, given A1(t) = CM1.

State of C1(t) | State of C1(t + Δt) = 0 | State of C1(t + Δt) = 1
C1(t) = 0 or C1(t) = 1 | R1(Δt)/R1(0) | 1 − R1(Δt)/R1(0)
Table 10. State-action transition probabilities of C2, given A2(t) = CM2.

State of C2(t) | 0 | 1 | 2 | … | M2
C2(t) = 0, 1, 2, …, M2 | P(0,0)^2 | P(0,1)^2 | P(0,2)^2 | … | P(0,M2)^2
Table 11. State-action transition probabilities of C2, given A2(t) = PM2.

State of C2(t) | 0 | 1 | 2 | … | M2
0 | P(0,0)^2 | P(0,1)^2 | P(0,2)^2 | … | P(0,M2)^2
1 | P(α1,0)^2 | P(α1,1)^2 | P(α1,2)^2 | … | P(α1,M2)^2
2 | P(α2,0)^2 | P(α2,1)^2 | P(α2,2)^2 | … | P(α2,M2)^2
… | … | … | … | … | …
M2 | P(αM2,0)^2 | P(αM2,1)^2 | P(αM2,2)^2 | … | P(αM2,M2)^2
Table 12. Computational complexity comparison.

Metric | Traditional MDP | MDP with DBN Inference
Joint state parameters | $O\big(\prod_{i=1}^{N} |C_i|\big)$ | $O\big(\sum_{i=1}^{N} |C_i| \times \prod_{p \in \mathrm{Parents}(C_i)} |C_p|\big)$
Transition probability calculations | $O\big((\prod_{i=1}^{N} |C_i|)^2\big)$ | $O\big(\prod_{p \in \mathrm{Parents}(C_i(t+\Delta t))} |C_p|\big)$ per component
Table 13. Definition of component nodes.

Component Node | Failure Type | Model Parameters | State Space
C1 | Exponential distribution | $R(t) = e^{-\lambda t}$, $\lambda = 0.02$ | {0, 1}
C2 | Gamma process | $f_Y(y) = Ga(y \mid \nu, u) = \dfrac{u^{\nu} y^{\nu - 1} e^{-u y}}{\Gamma(\nu)}$, $y \ge 0$; $\nu = 0.04$, $u = 6$, $D = 7$ | {0, 1, 2, …, 7}
C3 | Multi-state Markov process | $P\big(C_3(t + \Delta t) \mid C_3(t)\big) = \begin{pmatrix} 0.7 & 0.2 & 0.1 \\ 0.1 & 0.6 & 0.3 \\ 0 & 0 & 1 \end{pmatrix}$ | {0, 1, 2}
C4 | Weibull distribution | $R(t) = e^{-(t/\eta)^{\beta}}$, $\eta = 200$, $\beta = 1.5$ | {0, 1}
Table 14. Failure interactions between relevant nodes.

Failure Interaction | Definition
C1 → C3 | $P\big(C_3(t) \mid C_1(t)\big) = \begin{pmatrix} 0.8 & 0.1 & 0.1 \\ 0.1 & 0.5 & 0.4 \end{pmatrix}$
{C1, C2} → C4 | $h_4\big(t, C_1(t), C_2(t)\big) = \dfrac{\beta}{\eta}\left(\dfrac{t}{\eta}\right)^{\beta - 1} \exp\big(\gamma_1 C_1(t) + \gamma_2 C_2(t)\big)$, $\eta = 200$, $\beta = 1.5$, $\gamma_1 = 0.1$, $\gamma_2 = 0.4$
Table 15. Definition of action nodes and costs.

Action Node | Action Space | Costs
A1 | {0, CM1} | $c_1^c = 500$, $c_2^c = 750$, $c_3^c = 500$, $c_4^c = 800$, $c_2^p = 100$, $c_3^p = 200$, $c_d = 5000$, $c_s = 350$
A2 | {0, CM2, PM2} |
A3 | {0, CM3, PM3} |
A4 | {0, CM4} |
Table 16. Comparative Results Against Baseline Policies.

Maintenance Policy | Total Expected Cost | System Availability
CM-only | 70,062 ± 12,608 | 0.7707 ± 0.0066
ThPM-DBN (threshold = 0.99) | 66,567 ± 13,955 | 0.9611 ± 0.0055
ThPM-DBN (threshold = 0.95) | 68,906 ± 13,323 | 0.9168 ± 0.0042
ThPM-DBN (threshold = 0.9) | 70,201 ± 14,411 | 0.8764 ± 0.0051
Our model | 61,610 ± 13,908 | 0.9788 ± 0.0043
Table 17. Sensitivity Results to Component Cost Fluctuations.

Cost Fluctuation Level | Total Expected Cost | System Availability
+40% | 84,749 ± 18,234 | 0.9791 ± 0.0046
+20% | 73,462 ± 16,491 | 0.9788 ± 0.0051
0% (baseline) | 61,610 ± 13,908 | 0.9788 ± 0.0043
−20% | 50,240 ± 9986 | 0.9783 ± 0.0044
−40% | 35,822 ± 7161 | 0.9794 ± 0.0042
