Article

A New Evidential Reasoning Rule Considering Evidence Correlation with Maximum Information Coefficient and Application in Fault Diagnosis

1 Guangxi Key Laboratory of Trusted Software, Guilin University of Electronic Technology, Guilin 541004, China
2 Key Laboratory of the Ministry of Education, Guilin University of Electronic Technology, Guilin 541004, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(10), 3111; https://doi.org/10.3390/s25103111
Submission received: 1 April 2025 / Revised: 6 May 2025 / Accepted: 13 May 2025 / Published: 14 May 2025
(This article belongs to the Special Issue Recent Trends and Advances in Intelligent Fault Diagnostics)

Abstract: The evidential reasoning (ER) rule has been widely adopted in engineering fault diagnosis, yet its conventional implementations inherently neglect evidence correlations due to the foundational independence assumption required for Bayesian inference. This limitation becomes particularly critical in practical scenarios where heterogeneous evidence collected from diverse sensor types exhibits significant correlations. Existing correlation processing methods fail to comprehensively address both the linear and nonlinear correlations inherent in such heterogeneous evidence systems. To resolve these theoretical and practical constraints, this study develops MICER, a novel ER framework that incorporates correlation analysis based on the maximum information coefficient (MIC). The proposed methodology advances ER theory by systematically integrating evidence interdependencies, thereby expanding both the theoretical boundaries of ER rules and their applicability in real-world fault diagnosis. The method is experimentally verified on flange ring loosening and flywheel system fault diagnosis cases, demonstrating its effectiveness.

1. Introduction

Evidence theory has been significantly advanced in multi-source data integration in recent years [1,2]. As the cornerstone of the Dempster–Shafer (D-S) theory, the combination rule has been established as the fundamental mechanism in foundational evidence theory. Despite its robust evidence fusion capabilities, the emergence of counter-intuitive outcomes under conflicting evidence conditions has been widely documented [3,4]. The foundational breakthrough emerged with Yang and Singh’s [5] formulation of normalized evidence weighting, which was subsequently refined by Yang and Xu [6] through the ER Algorithm (ERA) specifically designed to resolve counter-intuitive outcomes in conflicting evidence scenarios.
Building upon the ERA framework, the ER rule [7] has been developed to extend both Dempster–Shafer evidential theory [8] and reasoning methodologies [9]. The ER rule operationalizes evidence synthesis through a generalized probabilistic reasoning framework. This dual-parameter integration systematically combines evidential weight with evidential reliability within a unified probabilistic framework. Evidential weight is determined by relative importance evaluation guided by decision-maker preferences to characterize subjective uncertainty. In parallel, evidential reliability is defined as the evidence source’s capacity for accurate measurement, serving as an objective uncertainty metric. Through orthogonal parameter treatment, reliability and weight are conjunctively embedded in belief distribution modeling. For fuzzy semantic inputs, affiliation functions are systematically applied for uncertainty-preserving information transformation. This synthesis establishes the ER rule as a comprehensive methodology for uncertainty quantification, fuzzy information processing, and probabilistic inference.
Since its introduction, the ER rule has been extensively studied due to its demonstrated scalability and adaptability. The power set domains ER (PSA_ER) framework was enhanced by Wang et al. [10] through attribute-based evidence integration, while the FoD framework was extended to PSA_ER. Mathematical formulations for dependability and evidence weight were established, along with physical interpretations of model parameters. Parameter optimization was subsequently performed using intelligent optimization techniques to enhance PSA_ER’s performance. A novel ER rule was developed by Du et al. [11] to address weight–reliability interdependencies, where evidence was recursively combined through orthogonal sum operations with formal theorems and inferences being established. Verification was conducted through numerical comparisons and case studies demonstrating the proposed methods’ effectiveness. ERr-CR was proposed by Wang et al. [12] for continuous reliability distributions, where reliability was defined as a random variable with probability distributions for paired evidence sources. Expected utility theory was introduced to characterize ERr-CR outputs. The framework was extended to multi-evidence systems, while an ER rule considering multi-source parameter uncertainties was formulated by Wang et al. [13]. A unified inference model was developed based on ER rule principles.
Meanwhile, ER has been extensively implemented in engineering domains including performance evaluation [14], pattern classification [15], and especially fault diagnosis. Based on ER rules, a new concurrent fault diagnosis model was proposed by Ning et al. [16]; multiple sub-ER models are built to form parallel diagnostic patterns, thereby improving system reliability. An ER-based time–space domain cascade fusion model (TS-ER for short) was proposed by Xu et al. [17] for rudder fault diagnosis; it obtains both time-domain fusion evidence and local fusion evidence from different spatial locations to form joint diagnostic evidence in both the time and space domains. Based on a set-theoretic correlation measure, a regularization term was added in [18] to capture relationships within the data, enabling rotating machinery fault diagnosis. An unbalanced ensemble approach combining DenseNet and evidential reasoning rules was proposed by Wang et al. [19] for diagnosing mechanical faults under class imbalance. A fault diagnosis model based on evidential reasoning (ER) rules was proposed in [20] for fault diagnosis of oil-immersed transformers; its reference points take the form of Gaussian distributions and are optimized by a constrained genetic algorithm (GA). For monitoring-based bearing fault diagnosis, a novel diagnostic ensemble approach with evidential reasoning rules was proposed by Wang et al. [21], which reduces the negative impact of interrelated and redundant features and generates accurate and diverse base classifiers. An online monitoring and fault diagnosis method for capacitor aging based on evidential reasoning (ER) rules was introduced by Liao et al. [22]; it extracts features from DC-link voltage data at different capacitor aging levels as diagnostic evidence, which is then combined according to ER rules.
Finally, the combination results are used to estimate the capacitor aging fault level. Atanassov interval-valued intuitionistic fuzzy sets and the belief rule base were combined by Jia et al. [23] to develop an ER fault detection model for flush airborne data sensing. A new multimodal recognition framework based on an evidential reasoning approach with interval reference values (ER-IRV) was constructed in [24] for multimodal system fault detection.
The effectiveness of ER rules has been demonstrated across diverse application scenarios through these empirical investigations. However, the Bayesian probabilistic foundation of ER rules requires evidence independence, which may be violated in complex systems with inherently correlated indicators. Novel interdependent reasoning frameworks have been developed, most notably the MAKER framework [25], designed for data-driven modeling under hybrid uncertainty conditions, in which both evidence reliability and pairwise interdependencies are quantitatively measured. Engineering validation was conducted in [26], with the MAKER framework’s implementation process detailed in Section 2.1. A MAKER-enhanced classifier architecture was developed by He et al. [27] for feature relevance extraction and classification optimization. The ERr-DE framework was established in [28] for dependent evidence integration: aggregation sequencing is optimized based on evidence reliability metrics, a distance correlation methodology computes Relative Total Dependence Coefficients (RTDC), and the RTDC mechanism is embedded as a discount factor within the framework’s probabilistic architecture. However, in the fault diagnosis of complex engineering systems, the collected indicators exhibit temporal or spatial correlations arising from the variety of sensor types. For example, when sensors are deployed too close together, their sensing regions overlap, introducing spatial correlation between the indicators used in fault diagnosis; when factors such as weather must be considered, the temperature and humidity indicators collected by sensors at different times exhibit temporal correlation. Current computational methodologies fail to account for this heterogeneity, resulting in biased correlation estimates. This limitation is evident in [26], where nonlinear associations are not considered in the analysis of evidence relevance.
Despite significant advancements in evidential reasoning (ER) applications for fault diagnosis, critical limitations persist in addressing evidence correlations:
(1)
Independence assumption incompatibility: State-of-the-art ER frameworks [16,17,18,19,20,21,22,23,24] inherently rely on the Bayesian independence axiom, disregarding intrinsic sensor network interactions (e.g., thermal–stress coupling in flange systems [16], spatiotemporal rudder fault couplings [17]). While the MAKER framework [25] partially addresses hybrid uncertainties, it retains this restrictive assumption, leading to systematic deviations in multi-sensor systems.
(2)
Parametric constraints in correlation modeling: Existing correlation-aware ER variants [18,20] impose distributional assumptions (Gaussian priors in [18], GA-optimized reference points in [20]), failing to capture nonlinear interdependencies among heterogeneous evidence (e.g., vibration spike distributions vs. chemical sensor Poisson noise).
(3)
Homogeneous evidence bias: Current methodologies predominantly focus on single-modality fusion (e.g., voltage-based capacitor aging in [22]), neglecting metric space incompatibility between heterogeneous sensors (vibration/thermal/chemical) and quantification challenges for cross-modal evidence (interval-valued fuzzy sets [23] vs. temporal ER-IRV [24]).
These gaps fundamentally restrict ER’s applicability in real-world systems where correlated, heterogeneous sensor networks are ubiquitous.
In summary, existing studies on correlation ER have not accounted for the correlation among heterogeneous evidence, while the constructed frameworks rely heavily on distributional assumptions. Consequently, identifying a correlation measure capable of addressing dependencies between heterogeneous evidence without requiring the data to conform to specific distributions remains a critical challenge in current correlation ER research. The MIC has been recognized as a robust measure for analyzing variable relationships, and its effectiveness for identifying multiple kinds of correlation has been demonstrated in big data environments. MIC is not tied to any specific functional form and can detect linear and nonlinear correlations as well as complex relational patterns [29]. A MIC-based feature selection framework was introduced in [30] for IoT data processing optimization, using MIC to quantify feature–class correlations and redundancy levels. Nonlinear association discovery was enabled by Liu et al. [31] through MIC-based rule mining. An enhanced MIC estimation algorithm (BackMIC) was developed by Cao et al., achieving algorithmic optimization through an equipartition axis backtracking mechanism [32]. A MIC-enhanced random forest algorithm was formulated in [33] to overcome computational inefficiency, feature redundancy, and limitations in expressive feature selection, with a parallel implementation realized on the Spark platform.
The current limitations are mainly in two directions: (1) Conventional ER fails to address interdependencies between correlated evidence; (2) existing ER studies that partially consider evidence correlations exhibit critical shortcomings, including heavy reliance on predefined data distribution assumptions and neglect of heterogeneous evidence interactions. This study aims to resolve these challenges by (1) developing a generalized ER framework capable of reasoning with correlated evidence and (2) introducing a data-agnostic correlation quantification mechanism that eliminates distributional assumptions while explicitly addressing heterogeneous evidence correlations. Therefore, the MICER rule framework is proposed, integrating MIC with evidential reasoning principles to enable hybrid evidence correlation analysis and inference process optimization. Three principal innovations distinguish this work from the correlation ER methodology documented in [26]:
(1)
Heterogeneous evidence correlations are systematically addressed, encompassing both linear and nonlinear relationships;
(2)
Evidence derivation is formulated through joint probabilistic modeling, with MIC being employed for correlation quantification;
(3)
The evidence inference rule is improved from the perspective of probabilistic reasoning: the maximum information coefficient evidential reasoning (MICER) rule is proposed, and both its general and recursive inference forms are given. The method is also applied to fault diagnosis.
The remainder of this paper is organized as follows. The problems with evidence-related ER rules are described in Section 2. ER rules with the maximum information coefficient are proposed in Section 3. A case study is carried out in Section 4 to demonstrate how the proposed ER rule is implemented and to confirm its efficacy in engineering practice. Section 5 concludes the paper.

2. Problem Statement

2.1. Related Work

This section briefly explains the non-independent ER rule framework (Maximum Likelihood Evidential Reasoning, MLER Rule) in [26]. The specific reasoning process of this framework is as follows:
The first step is to obtain evidence from the data. In [26], evidence is obtained through likelihood analysis in the following steps:
Step 1: Create a frequency record for every input into the system:
$$ F_{H,i,j} = \begin{cases} T_{H,j,i} + T_{H,j,i+1} + T_{i,j+2s}, & i = 1 \\ T_{H,i-1,j} + T_{H,i,j} + T_{i,j+2s}, & i = 2 \end{cases} \quad (1) $$
where $F_{H,i,j}$ is the frequency with which $x_i$ is in state $j$ given state $H$, and $T_{H,j,i}$ represents the total observations of sample data corresponding to state $H$ when system input $x_1$ is in state $i$ and $x_2$ is in state $j$. $s$ fulfills the following requirement:
$$ s = \begin{cases} 0, & H = H_1 \\ 1, & H = H_2 \\ 2, & H = \Theta \end{cases} \quad (2) $$
where $H$ is a subset of the frame of discernment (FoD) $\Theta$.
Step 2: Each system input’s likelihood record is created:
$$ c_{H,i,j} = p\left( (x_i = x_{i,j}) \mid H \right) = \frac{F_{H,i,j}}{\sum_{t=1}^{2} F_{H,i,t}}, \quad (3) $$
where $x_i = x_{i,j}$ means that $x_i$ is in state $j$, $c_{H,i,j}$ is the likelihood of $x_i = x_{i,j}$ given state $H$, and $p(\cdot \mid \cdot)$ denotes the conditional probability function.
Step 3: Each system input’s basic probability record is produced:
$$ \beta_{H,i,j} = \frac{c_{H,i,j}}{\sum_{A \subseteq \Theta} c_{A,i,j}}, \quad (4) $$
where $\beta_{H,i,j}$ is the basic probability that evidence $e_{i,j}$ points to state $H$, and $A$ is a subset of $\Theta$.
Based on (4), the evidence can be denoted as follows:
$$ e_{i,j} = \left\{ (H, \beta_{H,i,j}),\ H \subseteq \Theta,\ \textstyle\sum_{H \subseteq \Theta} \beta_{H,i,j} = 1 \right\}, \quad (5) $$
where $e_{i,j}$ refers to a piece of evidence acquired from the system input $x_i$ at $x_i = x_{i,j}$.
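For illustration only, the likelihood analysis above can be sketched in Python. The frequency records, state labels, and function name below are hypothetical, and only Steps 2 and 3 (the likelihood and basic-probability normalizations) are implemented:

```python
import numpy as np

def likelihoods_to_evidence(F):
    """Convert per-state frequency records into evidence (basic probabilities).

    F: dict mapping each hypothesis H (a subset of the FoD) to an array of
    frequencies F[H][j], i.e., how often the input was observed in state j
    given H.  Step 2 normalizes each row to likelihoods; Step 3 normalizes
    across hypotheses to basic probabilities.
    """
    # Step 2: likelihood c_{H,j} = F_{H,j} / sum_t F_{H,t}
    c = {H: np.asarray(f, dtype=float) / np.sum(f) for H, f in F.items()}
    # Step 3: basic probability beta_{H,j} = c_{H,j} / sum_A c_{A,j}
    total = np.sum([c[H] for H in c], axis=0)
    return {H: c[H] / total for H in c}

# Hypothetical frequency records for one input with two observed states,
# under hypotheses H1, H2 and the whole FoD Theta:
F = {"H1": [8, 2], "H2": [3, 7], "Theta": [5, 5]}
beta = likelihoods_to_evidence(F)
# Each column j of beta defines one piece of evidence e_{i,j};
# the beliefs over hypotheses sum to 1 for each j.
```

Each column of the returned table is one piece of evidence in the sense of the evidence definition above.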
Then, combining the marginal likelihood function and the joint likelihood function, the calculation of the joint basic probability is given in [26] as
$$ \beta_{H,ij,mn} = \frac{c_{H,ij,mn}}{\sum_{A \subseteq \Theta} c_{A,ij,mn}}, \quad H \subseteq \Theta, \quad (6) $$
where $c_{H,ij,mn}$ is the joint likelihood that both $x_{i,j}$ and $x_{m,n}$ are observed given state $H$. $c_{H,ij,mn}$ can be derived in the next two steps.
Step 1: A joint frequency record of $x_i$ and $x_m$ is created:
$$ F_{H,ij,mn} = T_{H,j,n}, \quad H \subseteq \Theta, \quad (7) $$
where $F_{H,ij,mn}$ is the joint frequency with which $x_i = x_{i,j}$ and $x_m = x_{m,n}$ are satisfied given state $H$.
Step 2: The combined likelihood record of x i and x m is created:
$$ c_{H,ij,mn} = p\left( (x_i = x_{i,j},\ x_m = x_{m,n}) \mid H \right) = \frac{F_{H,ij,mn}}{\sum_{u=1}^{2} \sum_{v=1}^{2} F_{H,iu,mv}}. \quad (8) $$
For joint likelihood inference, ref. [26] gives the computation of the interdependence index on the evidence:
$$ \alpha_{H,ij,mn} = \begin{cases} 0, & \text{if } \beta_{H,i,j} = 0 \text{ or } \beta_{H,m,n} = 0 \\ \dfrac{\beta_{H,ij,mn}}{\beta_{H,i,j}\, \beta_{H,m,n}}, & \text{otherwise} \end{cases} \quad (9) $$
where $\alpha_{H,ij,mn}$ is used to measure the interdependence of $x_{i,j}$ and $x_{m,n}$, and the basic probabilities of evidence $e_{i,j}$ and $e_{m,n}$ are denoted by $\beta_{H,i,j}$ and $\beta_{H,m,n}$, respectively.
After gathering the evidence’s interdependence index, a new ER rule is developed to combine the interdependent evidence as probability inference. This is how it is computed:
$$ \beta_H = \begin{cases} 0, & H = \emptyset \\ \dfrac{\tilde{m}_{H,ij,mn}}{\sum_{A \subseteq \Theta} \tilde{m}_{A,ij,mn}}, & H \neq \emptyset \end{cases} \quad (10) $$
$$ \tilde{m}_{H,ij,mn} = \left[ (1 - r_{m,n})\, m_{H,i,j} + (1 - r_{i,j})\, m_{H,m,n} \right] + \sum_{B \cap C = H} \gamma_{H,ij,mn}\, \alpha_{H,ij,mn}\, m_{B,i,j}\, m_{C,m,n}, \quad H \subseteq \Theta, \quad (11) $$
where $\beta_H$ is the joint probability that $e_{i,j}$ and $e_{m,n}$ jointly support $H$; $\tilde{m}_{H,ij,mn}$ is the non-normalized combined probability mass of $e_{i,j}$ and $e_{m,n}$; $r_{i,j}$ and $r_{m,n}$ denote the reliabilities; $\gamma_{H,ij,mn}$ is a non-negative coefficient measuring the joint support from $e_{i,j}$ and $e_{m,n}$; and $m_{H,i,j}$ and $m_{H,m,n}$ represent the basic probability masses with which $e_{i,j}$ and $e_{m,n}$ support $H$, respectively. $B$ and $C$ are also subsets of $\Theta$.
After the combination of two pieces of evidence e i , j and e m , n , the expected utility can be calculated as follows:
$$ u(e(2)) = \sum_{H \subseteq \Theta} \beta_H\, u(H), \quad (12) $$
where $u(H)$ is the utility of state $H$ and $e(2)$ is the combination of the two pieces of evidence $e_{i,j}$ and $e_{m,n}$.
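As an illustrative sketch (not the authors' implementation), the interdependence index and combination step of this subsection can be coded as follows. The example is restricted to singleton hypotheses, so the sum over B ∩ C = H keeps only B = C = H, the coefficient γ is taken as a single constant, and all numeric inputs are hypothetical:

```python
def mler_combine(beta1, beta2, beta_joint, r1, r2, w1, w2, gamma=1.0):
    """MLER-style fusion of two interdependent pieces of evidence.

    beta1, beta2: dicts mapping each hypothesis to its basic probability;
    beta_joint: joint basic probabilities beta_{H,ij,mn};
    r1, r2 / w1, w2: reliabilities and weights of the two evidence pieces.
    Simplification: singleton hypotheses only, constant gamma.
    """
    hyps = list(beta1)
    # interdependence index alpha_{H,ij,mn}
    alpha = {H: 0.0 if beta1[H] == 0.0 or beta2[H] == 0.0
             else beta_joint[H] / (beta1[H] * beta2[H]) for H in hyps}
    # basic probability masses m = w * beta
    m1 = {H: w1 * beta1[H] for H in hyps}
    m2 = {H: w2 * beta2[H] for H in hyps}
    # unnormalized combined mass, then normalize to joint probabilities
    m_tilde = {H: (1 - r2) * m1[H] + (1 - r1) * m2[H]
               + gamma * alpha[H] * m1[H] * m2[H] for H in hyps}
    total = sum(m_tilde.values())
    return {H: m_tilde[H] / total for H in hyps}
```

Given the resulting joint beliefs, the expected utility is simply the belief-weighted sum of per-state utilities.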
However, the work of Tang et al. [26] does not take into account the diversity of evidence within the same complex system when calculating evidence relevance. This paper takes that diversity into account and aims to establish a more generalized approach to incorporating evidence relevance into the reasoning of ER rules.

2.2. Evidence-Related ER Rules Problem Description

The problem description of the ER rule where evidence is not independent is as follows.
Problem 1: (Correlation Heterogeneity): Evidence diversity in complex systems introduces computational interference in correlation coefficient determination when inter-evidence correlations exist, resulting in biased diagnosis. This necessitates advanced correlation modeling capable of simultaneous linear/nonlinear relationship characterization.
Problem 2: (Uncertainty Propagation): Multi-source fusion processes are inherently affected by hybrid uncertainties in system test data. Uncertain data analysis becomes a critical research priority for probabilistic reasoning systems. While the ER framework has been widely adopted for uncertainty management, its inherent independence assumption remains problematic in practical engineering contexts. System complexity and functional interdependencies frequently violate the evidence independence requirement mandated by conventional ER implementations. Consequently, evidence correlations are systematically unaddressed in traditional ER frameworks, and the reliability of the fault diagnosis results is compromised. This work therefore focuses on reasoning process enhancement under correlated evidence conditions.
The evidence-related ER rule is structured as follows (Figure 1).
The modeling of expert knowledge $R$ includes the following. (1) For quantitative information $x_i$, $a_{i,j}$ denotes the $j$th reference value of $x_i$, with $a_{i,j+1} > a_{i,j}$, $i = 1, \ldots, N$, $j = 1, \ldots, J$, where $N$ denotes the amount of quantitative information and $J$ the number of reference values. The expert establishes a mapping between the numerical quantity $x_{i,j}$ and the reference value $a_{i,j}$. Let $a_{i,J}$ and $a_{i,1}$ be the largest and smallest reference values, respectively; then $x_i$ can be converted to the belief distribution $S(x_i) = \{(a_{i,j}, \beta_{i,j}),\ i = 1, \ldots, N,\ j = 1, \ldots, J\}$, where
$$ \beta_{i,j} = \frac{a_{i,j+1} - x_{i,j}}{a_{i,j+1} - a_{i,j}}, \quad \beta_{i,j+1} = 1 - \beta_{i,j}, \quad a_{i,j} \leq x_{i,j} \leq a_{i,j+1}, \quad j = 1, \ldots, J-1, $$
$$ \beta_{i,k} = 0, \quad k = 1, \ldots, J, \quad k \neq j, j+1. $$
Assuming that the reference value $a_{i,j}$ is equivalent to the evaluation level $A_{i,j}$ of the belief distribution, the quantitative information can be uniformly expressed as $S(x_i) = \{(A_{i,j}, \beta_{i,j}),\ i = 1, \ldots, N,\ j = 1, \ldots, J\}$, where $\beta_{i,j}$ denotes the belief degree to which the quantitative information $x_i$ is evaluated as level $A_{i,j}$. (2) For qualitative information, experts assign belief degrees relative to the reference values by subjective judgment combined with domain knowledge.
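A minimal sketch of this quantitative-information transformation, with hypothetical reference values:

```python
def to_belief_distribution(x, refs):
    """Map a numeric reading x onto expert reference values refs (sorted
    ascending), following beta_{i,j} = (a_{i,j+1} - x) / (a_{i,j+1} - a_{i,j})
    and beta_{i,j+1} = 1 - beta_{i,j}; all other belief degrees are zero.
    Returns a list of belief degrees, one per reference value."""
    beta = [0.0] * len(refs)
    x = min(max(x, refs[0]), refs[-1])  # clamp to [a_{i,1}, a_{i,J}]
    for j in range(len(refs) - 1):
        if refs[j] <= x <= refs[j + 1]:
            beta[j] = (refs[j + 1] - x) / (refs[j + 1] - refs[j])
            beta[j + 1] = 1.0 - beta[j]
            break
    return beta

# Hypothetical reference values [0, 10, 20]; a reading of 12 lies between
# the second and third reference values and splits its belief accordingly.
beta = to_belief_distribution(12.0, [0.0, 10.0, 20.0])
```

The belief degrees always sum to one, so the transformation preserves the probabilistic interpretation required by the ER rule.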

2.3. Evidence-Related ER Framework

2.3.1. Correlation Coefficient Calculation

Actual engineering data are diverse, including both linearly and nonlinearly related variables, and often contain irrelevant or redundant components that can distort evaluation results. It is therefore important to choose an appropriate correlation coefficient calculation method, one that can handle both linear and nonlinear data. The correlation coefficient is computed as
$$ \alpha = corr(x_i, x_j), \quad i, j = 1, 2, \ldots, n, \quad (13) $$
where $x_i$ and $x_j$ are inputs, $\alpha$ represents the correlation coefficients, and $corr(\cdot)$ is the correlation coefficient calculation function.

2.3.2. Establishment of Evidence-Related ER Rule Framework

After the appropriate correlation coefficient calculation method is selected, the improved ER rule using the correlation coefficient is
$$ y = newER(e_i, e_j, \alpha, v), \quad i, j = 1, 2, \ldots, L, \quad (14) $$
where $newER(\cdot)$ is the new ER rule inference, $y$ is the output, $e_i$ is the $i$th piece of evidence, and $v$ is the set of parameters required by the model, obtained as
$$ v = O(newER(\cdot), \varepsilon), \quad (15) $$
where $O(\cdot)$ denotes the optimization function and $\varepsilon$ is the optimization parameter set.
In summary, constructing an evidence-related evidential inference rule framework requires specifying the new ER rule and the optimization function, as well as the weights, reliabilities, and parameter sets.

3. ER Rule with Maximum Information Coefficient

In this section, the ER rule is modeled and inferred with the maximum information coefficient. In Section 3.1, the maximum information coefficient is used to compute the evidence correlation coefficient. The modeling of the maximum information coefficient ER rule (MICER rule) and the optimization process are described in Section 3.2.

3.1. Correlation Analysis

Mutual information (MI) is the foundation upon which the maximum information coefficient is constructed. It is capable of mining non-functional dependencies between variables in great detail in addition to measuring both linear and nonlinear relationships between variables in vast amounts of data. Due to the diversity of data variables (including linear and nonlinear, etc.) in health status evaluation in actual engineering, the characteristics of the evidence obtained from the data are consistent with it. Therefore, this paper chooses MIC to measure the degree of association between evidence. The larger the MIC value between two variables, the stronger the correlation; otherwise, the weaker the correlation.
The maximum information coefficient is mainly calculated by mutual information and grid division methods. Mutual information can be regarded as the uncertainty reduced by one random variable because another random variable is known, and it is mainly used to measure the degree of association between linear or nonlinear variables. Let S = { s i , i = 1 , 2 , , n } and T = { t i , i = 1 , 2 , , n } be random variables, where n is the number of samples, then the mutual information is defined as
$$ MI(S, T) = \sum_{s \in S} \sum_{t \in T} p(s, t) \log \frac{p(s, t)}{p(s)\, p(t)}, \quad (16) $$
where $p(s)$ and $p(t)$ are the marginal density functions, $p(s, t)$ is the joint probability density function, and $MI(S, T)$ is the mutual information of variables $S$ and $T$. The more mutual information two variables share, the stronger the correlation between them.
Assume that $D$ is a finite set of ordered pairs and that the partition $G_r$ divides the value range of variable $S$ into $a$ segments and the value range of $T$ into $b$ segments, so that $G_r$ is an $a \times b$ grid. Mutual information $MI(S, T)$ is calculated within this grid partition. There are many partitions with the same $a \times b$ size, and the maximum value of $MI(S, T)$ over the different partitions is taken as the mutual information value for that grid size. The maximum mutual information of $D$ under partitions $G_r$ is defined as
$$ MI(D, a, b) = \max MI(D | G_r), \quad (17) $$
where $D | G_r$ means that the data $D$ are partitioned using $G_r$, $MI(D, a, b)$ denotes the maximum mutual information value of $D$ under an $a \times b$ partition, and $MI(D | G_r)$ denotes the mutual information value of $D$ under partition $G_r$.
Although the maximum information coefficient uses mutual information to assess the quality of a grid, it is not merely an estimate of mutual information. The maximum normalized mutual information values obtained under different partitions form a feature matrix $Matrix(D)_{a,b}$, calculated as
$$ Matrix(D)_{a,b} = \frac{MI(D, a, b)}{\log \min(a, b)}. \quad (18) $$
Then, the maximum information coefficient is defined as
$$ MIC(D) = \max_{a \times b < B(n)} Matrix(D)_{a,b} = \max_{a \times b < B(n)} \frac{MI(D, a, b)}{\log \min(a, b)}, \quad (19) $$
where $B(n)$ is the upper limit of the grid size $a \times b$. It is generally believed that the best results are obtained with $B(n) = n^{0.6}$, so this value is also used in the experiments.
In summary, the MIC calculation is divided into three steps:
Step 1: Given $a$ and $b$, the maximum mutual information value is computed by gridding the scatter diagram of $(S, T)$ with $a$ columns and $b$ rows;
Step 2: The maximum mutual information value is normalized;
Step 3: The MIC value is determined by taking the highest normalized mutual information value across the different grid scales.
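A rough illustration of the three steps is sketched below. Note one deliberate simplification: the exact MIC maximizes mutual information over all possible a × b partitions, whereas this sketch evaluates only equal-frequency partitions for each grid size, so it is an approximation rather than the exact statistic; the grid bound B(n) = n^0.6 follows the text.

```python
import numpy as np
from itertools import product

def mutual_information(x, y, x_edges, y_edges):
    """MI (natural log) of the empirical distribution induced by a grid."""
    joint, _, _ = np.histogram2d(x, y, bins=[x_edges, y_edges])
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal over rows
    py = p.sum(axis=0, keepdims=True)   # marginal over columns
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def mic(x, y, alpha=0.6):
    """Approximate MIC: search all a*b <= n**alpha grid sizes, using
    equal-frequency bin edges as a stand-in for the optimal partition."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    bound = len(x) ** alpha
    best = 0.0
    for a, b in product(range(2, int(bound) + 1), repeat=2):
        if a * b > bound:
            continue
        # Step 1: grid the scatter plot with a columns and b rows
        x_edges = np.unique(np.quantile(x, np.linspace(0, 1, a + 1)))
        y_edges = np.unique(np.quantile(y, np.linspace(0, 1, b + 1)))
        mi = mutual_information(x, y, x_edges, y_edges)
        # Steps 2-3: normalize by log(min(a, b)) and keep the maximum
        best = max(best, mi / np.log(min(a, b)))
    return best
```

For a perfectly functional relationship such as y = x, the normalized value approaches 1; for unrelated variables it stays near 0, matching the interpretation of MIC given above.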
Remark 1.
Equations (13) and (16)–(19) are computed by default in this paper for ordered discrete variables.

3.2. MICER Rule

The ER rule characterizes evidence through a weighted belief distribution with reliability (WBDR), which supplements the belief distribution (BD) introduced in D-S evidence theory, and is established by performing orthogonal sum operations on weighted belief distributions (WBDs) and WBDRs. It is a generalized Bayesian inference process, i.e., a general joint probabilistic inference method. At the same time, it offers a straightforward and reliable reasoning process that can handle a range of uncertainties.
Suppose the FoD is defined as Θ = { H 1 , , H N } , where H i is the i th system state, i = 1 , , N . The power set of Θ consists of 2 N subsets, described by
$$ P(\Theta) = \left\{ \emptyset, \{H_1\}, \ldots, \{H_N\}, \{H_1, H_2\}, \ldots, \{H_1, H_N\}, \ldots, \{H_1, \ldots, H_{N-1}\}, \Theta \right\}. \quad (20) $$
Suppose there are L pieces of evidence { e 1 , , e L } ; the k th piece of evidence can be modeled as a basic probability distribution as follows:
$$ e_k = \{ (H, \beta_{H,k}),\ H \subseteq \Theta \}, \quad k = 1, \ldots, L, \quad (21) $$
where $H$ is a subset of $\Theta$ and $\beta_{H,k}$ is the belief assigned to state $H$ by evidence $e_k$.
Assume that $w_k$ represents the weight of $e_k$; then the basic probability mass with which $e_k$ supports $H$ is
$$ m_{H,k} = w_k \beta_{H,k}, \quad (22) $$
where $e_k$ is the $k$th piece of evidence.
Therefore, the underlying probability mass distribution of e k can be denoted as
$$ m_k = \{ (H, m_{H,k}),\ H \subseteq \Theta;\ (P(\Theta), m_{P(\Theta),k}) \}, \quad (23) $$
where $m_{P(\Theta),k}$ expresses the remaining support determined by the weight of the evidence, i.e., $m_{P(\Theta),k} = 1 - w_k$, and $P(\Theta)$ denotes the power set of $\Theta$. Because the equality constraint $\sum_{H \subseteq \Theta} \beta_{H,k} = 1$ always holds, we have $\sum_{H \subseteq \Theta} m_{H,k} + m_{P(\Theta),k} = 1$. Therefore, (23) is a generalized probability distribution form.
We first provide the basic ER rule lemma, which follows the work of Yang and Xu [7].
Lemma 1 (basic form of ER rule).
For two correlated pieces of evidence e 1 and e 2 , assuming their reliability is given by r 1 and r 2 , respectively, their basic probability mass distribution is shown in (23). Then, the probability that e 1 and e 2 jointly support H can be calculated as follows:
$$ \beta_{H,e(2)} = \begin{cases} 0, & H = \emptyset \\ \dfrac{\hat{m}_{H,e(2)}}{\sum_{A \subseteq \Theta} \hat{m}_{A,e(2)}}, & H \neq \emptyset \end{cases} \quad (24) $$
$$ \hat{m}_{H,e(2)} = \left[ (1 - r_1)\, m_{H,1} + (1 - r_2)\, m_{H,2} \right] + \sum_{C \cap F = H} \left( 1 - MIC_H(e(2)) \right) m_{C,1}\, m_{F,2}, \quad H \subseteq \Theta. \quad (25) $$
In (24), $\beta_{H,e(2)}$ is the joint probability that $e_1$ and $e_2$ jointly support $H$, $\hat{m}_{H,e(2)}$ denotes the unnormalized total probability mass of $H$ from $e_1$ and $e_2$, and $A$ is a subset of $\Theta$. In (25), $r_1$ and $r_2$ denote the reliabilities of $e_1$ and $e_2$. The first term $(1 - r_1) m_{H,1} + (1 - r_2) m_{H,2}$ is called the bounded sum of the individual supports for $H$ by $e_1$ and $e_2$, where $m_{H,1}$ and $m_{H,2}$ are the basic probability masses with which $e_1$ and $e_2$ support $H$, respectively. $MIC_H(e(2))$ denotes the interdependence of $e_1$ and $e_2$. The term $\sum_{C \cap F = H} (1 - MIC_H(e(2))) m_{C,1} m_{F,2}$ is called the orthogonal sum of the common support of $e_1$ and $e_2$, measuring the degree of all intersecting support for proposition $H$. Symbols $C$ and $F$ are also arbitrary subsets of $\Theta$.
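Lemma 1 can be sketched as follows for singleton hypotheses, where the orthogonal sum over C ∩ F = H reduces to the single product of the two masses on H; the masses, reliabilities, and MIC values in the example are hypothetical inputs:

```python
def micer_combine(m1, m2, r1, r2, mic_h):
    """Basic-form MICER fusion of two pieces of evidence.

    m1, m2: dicts mapping each hypothesis H to its basic probability
    mass w_k * beta_{H,k}; r1, r2: reliabilities; mic_h: dict mapping
    each hypothesis to the interdependence MIC_H(e(2)).
    Sketch restricted to singleton hypotheses.
    """
    # bounded sum of individual supports + discounted orthogonal sum
    m_hat = {H: (1 - r1) * m1[H] + (1 - r2) * m2[H]
             + (1 - mic_h[H]) * m1[H] * m2[H] for H in m1}
    # normalize the unnormalized masses to joint probabilities
    total = sum(m_hat.values())
    return {H: v / total for H, v in m_hat.items()}
```

When the MIC values are all zero (fully independent evidence), the orthogonal sum is undiscounted and the combination behaves like the conventional ER rule; larger MIC values shrink the contribution of the overlapping support.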
To generalize this result, we provide the following lemma for the recursive form of the ER rule:
Lemma 2 (recursive form of the ER rule).
For $L$ relevant pieces of evidence $\{e_1, \ldots, e_L\}$, assuming their reliabilities are given by $\{r_1, \ldots, r_L\}$ and their basic probability mass distributions are represented as in (23), the overall probability that the $L$ pieces of evidence jointly support $H$ can be produced by iteratively applying the following equations:
$$ \beta_{H,e(j)} = \begin{cases} 0, & H = \emptyset \\ \dfrac{\hat{m}_{H,e(j)}}{\sum_{B \subseteq \Theta} \hat{m}_{B,e(j)}}, & H \neq \emptyset \end{cases} \quad (26) $$
$$ m_{H,e(j)} = \begin{cases} 0, & H = \emptyset \\ \dfrac{\hat{m}_{H,e(j)}}{\sum_{B \subseteq \Theta} \hat{m}_{B,e(j)} + \hat{m}_{P(\Theta),e(j)}}, & H \neq \emptyset \end{cases} \quad (27) $$
$$ \hat{m}_{H,e(j)} = \left[ (1 - r_j)\, m_{H,e(j-1)} + m_{P(\Theta),e(j-1)}\, m_{H,j} \right] + \sum_{C \cap D = H} \left( 1 - MIC_H(e_{j-1}, e_j) \right) m_{C,e(j-1)}\, m_{D,j}, \quad H \subseteq \Theta, \quad (28) $$
$$ \hat{m}_{P(\Theta),e(j)} = (1 - r_j)\, m_{P(\Theta),e(j-1)}, \quad (29) $$
where $\beta_{H,e(j)}$ represents the joint probability that the first $j$ pieces of evidence support $H$, $j = 2, \ldots, L$; $m_{H,e(j)}$ is the joint probability mass assigned to $H$ by the first $j$ pieces of evidence; and $\hat{m}_{H,e(j)}$ and $\hat{m}_{P(\Theta),e(j)}$ represent the unnormalized combined probability masses of $H$ and $P(\Theta)$ from the first $j$ pieces of evidence. Initially, $m_{H,e(1)} = m_{H,1}$ and $m_{P(\Theta),e(1)} = m_{P(\Theta),1} = 1 - r_1$. $B$, $C$, and $D$ are also subsets of $\Theta$.
The aforementioned analysis indicates that new synthetic evidence $e(K)$ can be produced by combining the $K$ pieces of evidence. Its probability distribution is
$$e(K)=\left\{\left(H,\beta_{H,e(K)}\right)\mid H\subseteq\Theta\right\},$$
where $\beta_{H,e(K)}$ is the joint probability that the $K$ pieces of evidence support $H$. Assuming that the utility of state $H$ is $u(H)$, the expected utility of $e(K)$ is computed as
$$u(e(K))=\sum_{H\subseteq\Theta}\beta_{H,e(K)}\,u(H).$$
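In code, the expected utility is a one-line weighted sum over the fused belief distribution; the joint beliefs and grade utilities below are hypothetical placeholders.

```python
def expected_utility(beta, utility):
    # u(e(K)) = sum over H of beta_{H,e(K)} * u(H)
    return sum(beta[H] * utility[H] for H in beta)

# Illustrative joint beliefs and assumed grade utilities.
beta = {"healthy": 0.85, "loose": 0.15}
utility = {"healthy": 1.0, "loose": 0.0}
print(expected_utility(beta, utility))  # 0.85
```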
Remark 2.
Parameter definitions and methodological considerations.
The parameters addressed in this subsection are the weight and reliability of evidence and the reference grades. Weight and reliability serve distinct roles: weights represent the subjective importance of evidence (e.g., decision-maker preferences for specific indicators), while reliability quantifies its objective credibility (e.g., sensor measurement precision or expert judgment consistency). There are many methods for setting evidence weights, including subjective assignment (e.g., expert experience, preference rule mapping), objective calculation (e.g., the information entropy method, data-driven methods), combination weighting (combining subjective and objective weights), and dynamic adjustment mechanisms (e.g., conflict feedback adjustment, utility-sensitive weights). Methods for setting reliability are based on information consistency calculations (e.g., fluctuation analysis, perturbation factor models), statistical property modeling (e.g., the probability density function method, interval reliability), and expert calibration (e.g., reliability scoring, cross-validation). Different setting methods suit different scenarios, so a suitable method should be chosen according to the specific application. In this paper, the reference grades are set by experts in conjunction with domain knowledge.
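As one concrete instance of the objective weight-setting options listed above, the information entropy method can be sketched as follows. This is a generic sketch, not the setting procedure used in this paper, and the indicator matrix is made up for illustration.

```python
import numpy as np

def entropy_weights(X):
    """Objective indicator weights via the information entropy method.
    X: (n_samples, n_indicators) non-negative indicator matrix.
    Indicators whose values are more dispersed carry more information
    (lower entropy) and therefore receive larger weights."""
    P = X / X.sum(axis=0, keepdims=True)        # column-wise proportions
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)  # treat 0*log(0) as 0
    e = -(P * logP).sum(axis=0) / np.log(n)     # normalised entropy per column
    d = 1.0 - e                                 # degree of divergence
    return d / d.sum()                          # weights summing to 1

# Made-up indicator matrix: 3 samples, 3 indicators.
X = np.array([[0.2, 1.0, 5.0],
              [0.4, 1.1, 5.2],
              [0.9, 0.9, 5.1]])
w = entropy_weights(X)
print(w)  # the most dispersed indicator (column 0) receives the largest weight
```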
Remark 3.
Algorithmic robustness and parameterization strategy.
The orthogonal sum operator in evidential reasoning (ER) inherently embeds a residual belief assignment mechanism that redistributes residual uncertainty to the identification framework, providing strong resistance to interference and perturbations. This intrinsic robustness, empirically validated in prior studies [33,34], buffers against weight variations: experiments demonstrate ≤5% confidence deviation under ±20% weight perturbations. Consequently, the current methodology intentionally excludes weight optimization. Similarly, reliability calibration (reflecting objective evidence credibility) falls beyond conventional ER optimization scopes. Reference grades, derived from domain expert knowledge in this study, are retained without optimization to preserve interpretability and operational transparency. These deliberate design choices align with industrial diagnostic requirements, where explainability and stability outweigh marginal accuracy gains from parametric tuning.

4. Case Study

4.1. Case of Flange Ring Loosening Fault Diagnosis

This section verifies the effectiveness of the proposed ER rule framework through a case study of flange ring loosening fault diagnosis. In optical fiber communication, signals are transmitted through optical fibers, and optical fiber connectors join individual fibers so that transmission can continue across them. If the flange ring in a fiber optic connector is loose, the following problems may occur:
(1) Signal loss: A loose fiber optic flange leads to an unstable connection between fibers, preventing normal signal transmission and causing signal interruption.
(2) Decrease in signal quality: A loose fiber optic flange may result in an incomplete connection between fibers, degrading signal quality; this can manifest as reduced signal strength, increased noise, or data loss.
(3) Increased refraction loss: A loose fiber optic flange increases the refraction loss of light at the joint, reducing signal transmission efficiency.
In this section, a simulation experiment is carried out using a 6420A optical time domain reflectometer (OTDR). An optical sensor converts the optical signals emitted by the laser into electrical signals, yielding 200 sets of data measuring the transmission loss, splice loss, and other physical characteristics of the optical fiber. The OTDR interface is shown in Figure 2, the OTDR parameter setting interface in Figure 3, and the OTDR analysis interface in Figure 4. The working principle of the OTDR is as follows: light pulses emitted by the laser source propagate along the optical fiber, and part of the energy is returned to the OTDR port by Rayleigh scattering and Fresnel reflection. The slope of the Rayleigh backscatter trace characterizes the fiber attenuation coefficient, while the Fresnel reflection peaks identify connectors, breakpoints, and other events. The returned scattered and reflected light is directed through the directional coupler to a photodetector, which converts the optical signal into an electrical signal. Finally, after the parameters are set, the return curve, connection loss, reflection, and cumulative loss are displayed on the LCD screen through analysis and calculation. Therefore, when the flange ring of the optical fiber interface is loose, the propagation of light pulses is affected, and the connection loss along the fiber and the reflections at various locations change accordingly. Combined with domain knowledge, the attributes affected by flange ring loosening are found to be connection loss, reflection at the port, reflection at the connection, reflection at the end, and cumulative connection (of which there are two types, counted as two attributes). These six attributes are used in this section to assess fiber flange ring loosening faults, with the degree of looseness as the final diagnosis output.
The experimental data are shown in Figure 5.
The maximum mutual information coefficient is applied to the obtained data to calculate the correlation coefficient between each pair of pieces of evidence, as shown in Figure 6. The meanings of the letters in the figure are given in Table 1.
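The MIC computation itself can be approximated with a simple equal-frequency grid search. The sketch below is an approximation only: the full MINE algorithm of Reshef et al. optimizes the partition dynamically, so this exhaustive equi-frequency search will not exactly reproduce the values in Figure 6.

```python
import numpy as np
from itertools import product

def mic(x, y, max_bins=6):
    """Simplified MIC estimate: try equal-frequency grids with at most
    ~n^0.6 cells and keep the best normalised mutual information."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    best = 0.0
    for kx, ky in product(range(2, max_bins + 1), repeat=2):
        if kx * ky > n ** 0.6:          # standard MIC grid-size constraint
            continue
        qx = np.quantile(x, np.linspace(0, 1, kx + 1))
        qy = np.quantile(y, np.linspace(0, 1, ky + 1))
        ix = np.clip(np.searchsorted(qx, x, side="right") - 1, 0, kx - 1)
        iy = np.clip(np.searchsorted(qy, y, side="right") - 1, 0, ky - 1)
        joint = np.zeros((kx, ky))
        for a, b in zip(ix, iy):        # empirical joint distribution
            joint[a, b] += 1
        joint /= n
        px, py = joint.sum(axis=1), joint.sum(axis=0)
        nz = joint > 0
        I = (joint[nz] * np.log(joint[nz] / np.outer(px, py)[nz])).sum()
        best = max(best, I / np.log(min(kx, ky)))  # normalise to [0, 1]
    return best

rng = np.random.default_rng(0)
x = rng.random(200)
print(mic(x, x**2))             # strong monotone dependence: close to 1
print(mic(x, rng.random(200)))  # independent noise: close to 0
```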
We converted the experimental data into corresponding belief distributions; a selection of the data is shown in Table 2.
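The conversion from a raw attribute reading to a belief distribution can follow the standard rule-based information transformation used with ER: a reading falling between two adjacent reference values splits its belief between the corresponding grades in proportion to its distance from each. A minimal sketch follows; the reference values are hypothetical, not those set by the experts for the OTDR attributes.

```python
def to_belief(value, refs):
    """refs: grade reference values in increasing order, e.g. [low, mid, high].
    Returns a belief distribution over len(refs) grades."""
    beta = [0.0] * len(refs)
    if value <= refs[0]:          # clamp below the lowest grade
        beta[0] = 1.0
        return beta
    if value >= refs[-1]:         # clamp above the highest grade
        beta[-1] = 1.0
        return beta
    for k in range(len(refs) - 1):
        if refs[k] <= value <= refs[k + 1]:
            # Linear split between the two bracketing grades.
            beta[k + 1] = (value - refs[k]) / (refs[k + 1] - refs[k])
            beta[k] = 1.0 - beta[k + 1]
            return beta

# Hypothetical reference values for a normalised attribute.
print([round(b, 2) for b in to_belief(0.7, [0.0, 0.5, 1.0])])  # [0.0, 0.6, 0.4]
```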
Eventually, the MICER rule model of Section 3.2 is built and used for inference, while the MLER rule method of Section 2.1, maximum likelihood evidential reasoning (MAKER) [35], and the traditional ER rule method serve as comparison experiments; the final diagnosis results of the four methods are compared in Figure 7. The correlation coefficient calculated by the MLER rule is shown in Figure 8.
As shown in Figure 7, the proposed method achieves superior fault diagnosis accuracy compared to the MLER rule framework and MAKER. This advantage is attributed to the hybrid linear/nonlinear data characteristics being effectively addressed in the correlation coefficient computation, a critical limitation of the MLER and MAKER approaches. Meanwhile, the traditional ER rule also yields lower diagnostic accuracy because it fails to consider the interdependencies between pieces of evidence. The proposed methodology has thus been validated through a comprehensive case study encompassing both the modeling and reasoning processes.

4.2. Case of Flywheel System Fault Diagnosis

To verify the generalization ability of the model, a friction torque fault of a flywheel system is selected for the fault diagnosis case study in this paper. For flywheel system fault diagnosis, two observed variables, shaft temperature and friction torque, are obtained through temperature and torque sensors deployed on the bearings, while three observed variables, current, voltage, and rotational speed, are obtained through the motor. A friction torque failure of the flywheel system directly affects the behavior of the shaft temperature and rotational speed of the flywheel. Friction generates heat, which rapidly raises the temperature of the bearings, while the rotational speed of the flywheel changes only weakly because of closed-loop control. To overcome the impact of friction torque on the motor output torque, the control system increases the voltage and current of the flywheel, which in turn increases the motor output torque; the motor output torque is proportional to the flywheel speed. Ultimately, the closed-loop control stabilizes the flywheel speed around the target value, while the rotational speed is affected by current and voltage. The health and fault data of the flywheel used in this paper are telemetry and simulated values of the flywheel observation variables, as shown in Figure 9.
The obtained data were used to calculate the correlation coefficients of each component using the maximum mutual information coefficient, as shown in Figure 10.
The experimental data are converted into corresponding belief distributions; a selection of the data is shown in Table 3.
Eventually, the MICER rule model of Section 3.2 is built and used for inference, while the MLER rule method of Section 2.1 and MAKER are used as comparison experiments; the final fault diagnosis results of the three methods are compared with the true values in Figure 11. The correlation coefficients calculated by the MLER rule are shown in Figure 12.
As demonstrated in Figure 11, enhanced diagnostic accuracy is achieved by the proposed methodology, exhibiting significantly closer alignment with truth values compared to the MLER Rule framework and MAKER. The method proposed in this section is capable of both health evaluation and fault diagnosis, further demonstrating the generalization of the model.

5. Conclusions

This study proposes the MICER framework to address two critical limitations of conventional evidential reasoning (ER) in fault diagnosis: the assumption of evidence independence and the inability to quantify nonlinear evidence correlations. The key methodological contributions are summarized as follows.
1. Unified Correlation Analysis for Heterogeneous Evidence
The framework introduces a novel approach to simultaneously process linear and nonlinear interdependencies among multi-source sensor data. By integrating maximum mutual information coefficient (MIC) theory, it overcomes the constraints of currently available ER methods that consider correlation, effectively capturing complex interaction patterns (e.g., inverse temperature–humidity relationships in fault diagnosis where environmental effects need to be taken into account) that traditional linear metrics fail to detect.
2. Joint Probability Modeling with MIC Quantification
A nonparametric joint probability modeling mechanism is established through MIC’s adaptive grid partitioning, enabling correlation quantification without prior assumptions on evidence distributions. This allows robust fusion of heterogeneous sensor data (vibration, thermal, chemical) under uncertainty, as validated in flywheel system diagnostics.
3. Generalized ER Rule Reformulation
The conventional Dempster combination rule is extended through recursive computation architectures that embed MIC-derived correlation weights. This reformulation resolves conflicts arising from unmodeled evidence dependencies while maintaining compatibility with existing ER implementations. The recursive form specifically addresses multi-stage fusion scenarios in static systems, avoiding combinatorial complexity growth.
Two critical research frontiers emerge from this work: first, the development of causal correlation discrimination frameworks to address spurious sensor data associations, particularly crucial for systems with time-varying operating conditions; second, the integration of online learning mechanisms to enable MICER’s autonomous adaptation to emerging fault patterns in evolving industrial systems. Future investigations should prioritize establishing standardized protocols for correlation hierarchy analysis and validation benchmarks for next-generation evidence fusion systems.

Author Contributions

Conceptualization, S.L.; methodology, S.L.; software, S.L. and S.D.; validation, S.L., L.C. and G.H.; writing—original draft preparation, S.L.; data curation, S.L. and H.G.; writing—review and editing, S.L. and G.H.; visualization, S.L.; supervision, L.C.; project administration, L.C. and G.H.; funding acquisition, L.C. and G.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of China, grant numbers U22A2099, 62273113, 62203461, 62203365, and by the Innovation Project of Guangxi Graduate Education, grant number YCBZ2023130, and by the Guangxi Higher Education Undergraduate Teaching Reform Project Key Project, grant number 2022JGZ130.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ER: Evidential Reasoning
MIC: Maximum Mutual Information Coefficient
MICER Rule: ER Rule Framework Incorporating the Maximum Mutual Information Coefficient
D-S: Dempster–Shafer
ERA: ER Algorithm
PSA_ER: Power Set Domains ER
ICS: Industrial Control System
ER-IRV: Evidential Reasoning Approach with Interval Reference Values
RTDC: Relative Total Dependence Coefficients
MLER Rule: Maximum Likelihood ER Rule
WBDR: Reliable Weighted Belief Distribution
BD: Belief Distribution
OTDR: Optical Time Domain Reflectometer

References

  1. Liu, K.; Yang, T.; Ma, J.; Cheng, Z. Fault-tolerant event detection in wireless sensor networks using evidence theory. KSII Trans. Internet Inf. Syst. TIIS 2015, 9, 3965–3982. [Google Scholar]
  2. Sezer, S.I.; Akyuz, E.; Arslan, O. An extended HEART Dempster–Shafer evidence theory approach to assess human reliability for the gas freeing process on chemical tankers. Reliab. Eng. Syst. Saf. 2022, 220, 108275. [Google Scholar] [CrossRef]
  3. Dempster, A.P. Upper and lower probabilities induced by a multivalued mapping. In Classic Works of the Dempster-Shafer Theory of Belief Functions; Springer: Berlin/Heidelberg, Germany, 2008; pp. 57–72. [Google Scholar]
  4. Du, Y.W.; Zhong, J.J. Generalized combination rule for evidential reasoning approach and Dempster–Shafer theory of evidence. Inf. Sci. 2021, 547, 1201–1232. [Google Scholar] [CrossRef]
  5. Yang, J.B.; Singh, M.G. An evidential reasoning approach for multiple-attribute decision making with uncertainty. IEEE Trans. Syst. Man Cybern. 1994, 24, 1–18. [Google Scholar] [CrossRef]
  6. Yang, J.B.; Xu, D.L. On the evidential reasoning algorithm for multiple attribute decision analysis under uncertainty. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2002, 32, 289–304. [Google Scholar] [CrossRef]
  7. Yang, J.B.; Xu, D.L. Evidential reasoning rule for evidence combination. Artif. Intell. 2013, 205, 1–29. [Google Scholar] [CrossRef]
  8. Sun, T.; Liu, B. Health Assessment Based on DS Evidence Theory of Equipment. Int. J. Adv. Netw. Monit. Control. 2020, 5, 15–27. [Google Scholar] [CrossRef]
  9. Zhou, X.; Zhou, Z.; Han, X.; Ming, Z.; Bian, Y. Evaluating LIMU System Quality with Interval Evidence and Input Uncertainty. KSII Trans. Internet Inf. Syst. 2023, 17, 2945–2965. [Google Scholar]
  10. Wang, J.; Zhou, Z.-J.; Ming, Z.-C.; Zhang, P.; Lian, Z.; Li, Z.-G. Inference, optimization, and analysis of an evidential reasoning rule-based modeling approach. IEEE Trans. Aerosp. Electron. Syst. 2022, 59, 3907–3923. [Google Scholar] [CrossRef]
  11. Song, H.; Yuan, Y.; Wang, Y.; Yang, J.; Luo, H.; Li, S. A Security Posture Assessment of Industrial Control Systems Based on Evidential Reasoning and Belief Rule Base. Sensors 2024, 24, 7135. [Google Scholar] [CrossRef]
  12. Wang, J.; Zhou, Z.-J.; Hu, C.-H.; Tang, S.-W.; Cao, Y. A new evidential reasoning rule with continuous probability distribution of reliability. IEEE Trans. Cybern. 2021, 52, 8088–8100. [Google Scholar] [CrossRef] [PubMed]
  13. Alshamaa, D.; Mourad-Chehade, F.; Honeine, P.; Chkeir, A. An evidential framework for localization of sensors in indoor environments. Sensors 2020, 20, 318. [Google Scholar] [CrossRef]
  14. Yang, L.-H.; Liu, J.; Wang, Y.-M.; Martinez, L. A micro-extended belief rule-based system for big data multiclass classification problems. IEEE Trans. Syst. Man Cybern. Syst. 2018, 51, 420–440. [Google Scholar] [CrossRef]
  15. Kong, G.; Xu, D.-L.; Yang, J.-B.; Wang, T.; Jiang, B. Evidential reasoning rule-based decision support system for predicting ICU admission and in-hospital death of trauma. IEEE Trans. Syst. Man Cybern. Syst. 2020, 51, 7131–7142. [Google Scholar] [CrossRef]
  16. Ning, P.; Zhou, Z.; Cao, Y.; Tang, S.; Wang, J. A concurrent fault diagnosis model via the evidential reasoning rule. IEEE Trans. Instrum. Meas. 2021, 71, 3500916. [Google Scholar] [CrossRef]
  17. Xu, X.; Huang, W.; Zhang, X.; Zhang, Z.; Liu, F.; Brunauer, G. An evidential reasoning-based information fusion method for fault diagnosis of ship rudder. Ocean Eng. 2025, 318, 120082. [Google Scholar] [CrossRef]
  18. Xiong, J.; Li, C.; Cen, J.; Liang, Q.; Cai, Y. Fault diagnosis method based on improved evidence reasoning. Math. Probl. Eng. 2019, 2019, 7491605. [Google Scholar] [CrossRef]
  19. Wang, G.; Zhang, Y.; Zhang, F.; Wu, Z. An ensemble method with DenseNet and evidential reasoning rule for machinery fault diagnosis under imbalanced condition. Measurement 2023, 214, 112806. [Google Scholar] [CrossRef]
  20. Jina, E.; Zhang, Y.; He, W.; Zhang, W.; Cao, Y.; Lv, G. A new method for oil-immersed transformers fault diagnosis based on evidential reasoning rule with optimized probabilistic distributed. IEEE Access 2024, 12, 34289–34305. [Google Scholar]
  21. Wang, G.; Zhang, F.; Cheng, B.; Fang, F. DAMER: A novel diagnosis aggregation method with evidential reasoning rule for bearing fault diagnosis. J. Intell. Manuf. 2021, 32, 1–20. [Google Scholar] [CrossRef]
  22. Liao, L.; Gao, H.; He, Y.; Xu, X.; Lin, Z.; Chen, Y.; You, F. Fault diagnosis of capacitance aging in DC link capacitors of voltage source inverters using evidence reasoning rule. Math. Probl. Eng. 2020, 2020, 5724019. [Google Scholar] [CrossRef]
  23. Jia, Q.; Hu, J.; Zhang, W. A novel fault detection model based on atanassov’s interval-valued intuitionistic fuzzy sets, belief rule base and evidential reasoning. IEEE Access 2019, 8, 4551–4567. [Google Scholar] [CrossRef]
  24. Zhang, P.; Zhou, Z.; Wang, J.; Tang, S.; Zhao, D. An evidential reasoning-based fault detection method for multi-mode system. Measurement 2022, 193, 110942. [Google Scholar] [CrossRef]
  25. Yang, J.B.; Xu, D.L. Inferential modelling and decision making with data. In Proceedings of the 2017 23rd International Conference on Automation and Computing (ICAC), Huddersfield, UK, 7–8 September 2017; IEEE: Piscataway, NY, USA, 2017; pp. 1–6. [Google Scholar]
  26. Tang, S.-W.; Zhou, Z.-J.; Hu, G.-Y.; Cao, Y.; Ning, P.-Y.; Wang, J. Evidential reasoning rule with likelihood analysis and perturbation analysis. IEEE Trans. Syst. Man Cybern. Syst. 2022, 53, 1209–1221. [Google Scholar] [CrossRef]
  27. He, H.; Zhang, X.; Xu, X.; Li, Z.; Bai, Y.; Liu, F.; Steyskal, F.; Brunauer, G. A Data Classifier Based on Maximum Likelihood Evidential Reasoning Rule. Math. Probl. Eng. 2023, 2023, 5933793. [Google Scholar] [CrossRef]
  28. Zhang, P.; Zhou, Z.; Tang, S.; Wang, J.; Hu, G.; Zhao, D.; Cao, Y. On the evidential reasoning rule for dependent evidence combination. Chin. J. Aeronaut. 2023, 36, 306–327. [Google Scholar] [CrossRef]
  29. Reshef, D.N.; Reshef, Y.A.; Finucane, H.K.; Grossman, S.R.; McVean, G.; Turnbaugh, P.J.; Lander, E.S.; Mitzenmacher, M.; Sabeti, P.C. Detecting novel associations in large data sets. Science 2011, 334, 1518–1524. [Google Scholar] [CrossRef]
  30. Sun, G.; Li, J.; Dai, J.; Song, Z.; Lang, F. Feature selection for IoT based on maximal information coefficient. Future Gener. Comput. Syst. 2018, 89, 606–616. [Google Scholar] [CrossRef]
  31. Liu, M.; Yang, Z.; Guo, Y.; Jiang, J.; Yang, K. MICAR: Nonlinear association rule mining based on maximal information coefficient. Knowl. Inf. Syst. 2022, 64, 3017–3042. [Google Scholar] [CrossRef]
  32. Wang, N.; Ren, J.; Luo, Y.; Guo, K.; Liu, D. UAV actuator fault detection using maximal information coefficient and 1-D convolutional neural network. In Proceedings of the 2021 Global Reliability and Prognostics and Health Management (PHM-Nanjing), Nanjing, China, 15–17 October 2021; IEEE: Piscataway, NY, USA, 2021; pp. 1–6. [Google Scholar]
  33. Zhou, Z.J.; Tang, S.W.; Hu, C.H.; Cao, Y.; Wang, J. Evidential reasoning theory and its applications. J. Autom. 2021, 47, 970–984. [Google Scholar]
  34. Hou, Z.-J.; Zhang, P.; Wang, J.; Zhang, C.-C.; Chen, L.-Y.; Ning, P.-Y. Inference Analysis on the Evidential Reasoning Rule Under Evidence Weight Variations. IEEE Trans. Aerosp. Electron. Syst. 2023, 60, 430–448. [Google Scholar]
  35. Yang, J.B.; Xu, D.L. Maximum Likelihood Evidential Reasoning. Artif. Intell. 2025, 340, 104289. [Google Scholar] [CrossRef]
Figure 1. Evidence-related evidential reasoning rule structure.
Figure 2. The OTDR interface.
Figure 3. The OTDR parameter setting interface.
Figure 4. The OTDR analysis interface.
Figure 5. The experimental data.
Figure 6. MIC value plot among the evidence.
Figure 7. Comparison of diagnosis results of flange ring loosening.
Figure 8. The correlation coefficient calculated by the MLER rule in flange loosening diagnosis.
Figure 9. Experimental data of flywheel system.
Figure 10. Evidence correlation coefficient of flywheel system.
Figure 11. Comparison of flywheel system fault diagnosis results.
Figure 12. The correlation coefficient calculated by the MLER rule in flywheel system fault diagnosis.
Table 1. The meaning of each letter used in the correlation analysis.

Letter | Meaning
x1 | Connection loss
x2 | Reflection at the port
x3 | Reflection at the connection
x4 | Reflection at the end
x5 | Cumulative connection 1
x6 | Cumulative connection 2
x7 | Looseness degree
Table 2. Transformation of data from partial flange ring loosening into belief distributions (each cell lists the belief degrees β over the three reference grades).

x1 | x2 | x3 | x4
0.00, 0.98, 0.02 | 0.49, 0.51, 0.00 | 0.34, 0.66, 0.00 | 0.83, 0.17, 0.00
0.07, 0.93, 0.00 | 0.57, 0.43, 0.00 | 0.30, 0.70, 0.00 | 0.78, 0.22, 0.00
0.00, 0.94, 0.06 | 0.63, 0.37, 0.00 | 0.48, 0.52, 0.00 | 0.83, 0.17, 0.00
0.04, 0.96, 0.00 | 0.61, 0.39, 0.00 | 0.34, 0.66, 0.00 | 0.79, 0.21, 0.00
0.00, 0.87, 0.13 | 0.68, 0.32, 0.00 | 0.40, 0.60, 0.00 | 0.84, 0.16, 0.00
0.11, 0.89, 0.00 | 0.68, 0.32, 0.00 | 0.37, 0.63, 0.00 | 0.80, 0.20, 0.00
0.00, 0.93, 0.07 | 0.74, 0.26, 0.00 | 0.34, 0.66, 0.00 | 0.80, 0.20, 0.00
0.00, 0.93, 0.07 | 0.74, 0.26, 0.00 | 0.32, 0.68, 0.00 | 0.81, 0.19, 0.00
0.00, 0.90, 0.10 | 0.77, 0.23, 0.00 | 0.31, 0.69, 0.00 | 0.78, 0.22, 0.00
0.06, 0.94, 0.00 | 0.74, 0.26, 0.00 | 0.30, 0.70, 0.00 | 0.79, 0.21, 0.00
0.00, 0.77, 0.23 | 0.73, 0.27, 0.00 | 0.34, 0.66, 0.00 | 0.81, 0.19, 0.00

x5 | x6 | x7
0.89, 0.11, 0.00 | 0.00, 0.62, 0.38 | 1.00, 0.00, 0.00
0.87, 0.13, 0.00 | 0.00, 0.76, 0.24 | 1.00, 0.00, 0.00
0.87, 0.13, 0.00 | 0.00, 0.54, 0.46 | 1.00, 0.00, 0.00
0.90, 0.10, 0.00 | 0.00, 0.69, 0.31 | 1.00, 0.00, 0.00
0.93, 0.07, 0.00 | 0.00, 0.60, 0.40 | 1.00, 0.00, 0.00
0.94, 0.06, 0.00 | 0.00, 0.85, 0.15 | 1.00, 0.00, 0.00
0.97, 0.03, 0.00 | 0.00, 0.67, 0.33 | 1.00, 0.00, 0.00
0.93, 0.07, 0.00 | 0.00, 0.75, 0.25 | 1.00, 0.00, 0.00
0.89, 0.11, 0.00 | 0.00, 0.85, 0.15 | 1.00, 0.00, 0.00
0.93, 0.07, 0.00 | 0.00, 0.86, 0.14 | 1.00, 0.00, 0.00
0.89, 0.11, 0.00 | 0.00, 0.58, 0.42 | 1.00, 0.00, 0.00
Table 3. Transformation of partial data from flywheel systems into belief distributions (each cell lists the belief degrees β over the three reference grades).

Electricity x1 | Voltage x2 | Rotational speed x3
0.00, 0.00, 1.00 | 0.45, 0.55, 0.00 | 0.00, 0.18, 0.83
0.00, 0.34, 0.66 | 1.00, 0.00, 0.00 | 0.00, 0.37, 0.63
0.00, 0.00, 1.00 | 0.00, 0.00, 1.00 | 0.00, 0.47, 0.53
1.00, 0.00, 0.00 | 1.00, 0.00, 0.00 | 0.15, 0.85, 0.00
0.52, 0.48, 0.00 | 0.45, 0.55, 0.00 | 0.00, 0.34, 0.66
0.00, 0.34, 0.66 | 0.00, 0.00, 1.00 | 0.55, 0.45, 0.00
0.00, 0.34, 0.66 | 0.00, 0.85, 0.15 | 0.52, 0.48, 0.00
1.00, 0.00, 0.00 | 0.00, 0.85, 0.15 | 0.00, 0.57, 0.43
0.52, 0.48, 0.00 | 1.00, 0.00, 0.00 | 0.00, 0.55, 0.45
1.00, 0.00, 0.00 | 0.00, 0.85, 0.15 | 0.06, 0.94, 0.00

Axle temperature x4 | Friction torque x5
0.00, 0.00, 1.00 | 0.15, 0.85, 0.00
0.86, 0.14, 0.00 | 0.00, 0.84, 0.16
0.71, 0.29, 0.00 | 0.00, 0.87, 0.13
0.86, 0.14, 0.00 | 0.19, 0.81, 0.00
0.00, 0.63, 0.38 | 0.00, 0.90, 0.10
0.00, 1.00, 0.00 | 0.08, 0.92, 0.00
0.29, 0.71, 0.00 | 0.00, 0.84, 0.16
0.00, 0.50, 0.50 | 0.23, 0.77, 0.00
0.00, 0.13, 0.88 | 0.00, 0.81, 0.19
0.00, 0.00, 1.00 | 0.15, 0.85, 0.00