Essay

Sustainable Decision-Making Enhancement: Trust and Linguistic-Enhanced Conflict Measurement in Evidence Theory

1 School of Management Science and Engineering, Shandong University of Finance and Economics, Jinan 250014, China
2 School of Architecture and Urban Planning, Tongji University, Shanghai 200082, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work and should be considered co-first authors.
Sustainability 2024, 16(6), 2288; https://doi.org/10.3390/su16062288
Submission received: 23 January 2024 / Revised: 27 February 2024 / Accepted: 7 March 2024 / Published: 9 March 2024

Abstract

This research presents an advanced methodology to enhance conflict measurement within the Dempster–Shafer framework, integrating linguistic preferences and trust relationships for improved sustainability decision-making. By developing a unique algorithm, we introduce a novel approach to quantify inter-expert similarity and establish consensus thresholds. Furthermore, our study innovates with a dual-path adjustment mechanism to effectively reconcile discrepancies in expert opinions. These methodological advancements enable a more accurate and nuanced representation of expert judgments, facilitating superior decision support in sustainability-oriented applications. Through rigorous numerical simulations and a detailed case study, we validate our approach’s efficacy in optimizing decision-making processes, underscoring its potential to significantly influence sustainable practices and policy formulation. Our contributions not only advance theoretical understanding but also offer practical tools for incorporating expert insights into the pursuit of sustainability goals, marking a significant leap forward in decision-making research.

1. Introduction

In the quest for sustainability, decision-making processes are increasingly confronted with the challenge of navigating through complex and often conflicting information. The essence of sustainability lies in its inherent complexity, involving multifaceted interactions between environmental, economic, and social factors. This complexity requires decision-making frameworks that are not only robust but also flexible enough to accommodate diverse expert opinions and uncertain evidence. Traditional decision-making tools, while useful in deterministic settings, often fall short in addressing the nuanced uncertainties and conflicts characteristic of sustainability issues.
The motivation for this research stems from the critical need to enhance the efficacy of decision-making in sustainability by effectively managing uncertainty and conflict among expert opinions. Recognizing the limitations of existing frameworks, our study seeks to leverage the Dempster–Shafer theory, renowned for its capacity to handle uncertain information, by integrating linguistic preferences and trust relationships. This innovative approach is aimed at enriching the decision-making process, allowing for more nuanced interpretation and integration of expert judgments. The incorporation of linguistic preferences acknowledges the qualitative nature of much sustainability-related information, while trust relationships offer a means to assess and weight expert contributions, providing a more comprehensive and reliable basis for decisions.
Since its inception, the Dempster–Shafer (D-S) Theory of Evidence [1,2,3,4,5,6,7,8] has blazed a brilliant trail in the articulation and integration of uncertain information, which is critical to advancing the Sustainable Development Goals. Its strength lies in its keen delineation of ‘uncertainty’ and ‘ignorance’, allowing for a nuanced understanding and treatment of incomplete information. The theory’s rules of combination are a powerful tool for merging evidence, even in cases where a priori probabilities are elusive, thus achieving satisfactory inference results through its concise reasoning framework.
This powerful theoretical construct has been widely used in a variety of fields, as highlighted by its utility in data fusion, pattern recognition, artificial intelligence [9,10,11,12,13,14,15], expert diagnosis, economic forecasting [16,17,18,19,20,21,22], and multi-attribute group decision making [8,23,24,25,26,27,28,29,30], all of which are integral to building sustainable development. In the area of data fusion [31,32,33,34,35,36,37,38], D-S theory helps integrate information from different sensors or data sources to provide more accurate estimates and decisions. This proves invaluable in complex environments where target tracking or intelligence analysis is critical, and it is here that the theory shines in sustainability-oriented applications that can enhance environmental monitoring and make more informed resource management decisions. In the area of pattern recognition [39,40,41], the theory can classify and recognize patterns in ambiguous and incomplete information. By integrating different features and chains of evidence, it greatly improves the accuracy of pattern recognition systems. These systems are crucial for sustainable development as they can be applied to biodiversity conservation, climate change modeling, and monitoring of sustainable development indicators. Expert diagnosis [42,43,44] is another area where the strengths of the D-S theory are demonstrated. It is adept at integrating the opinions and insights of multiple experts to make more reliable and consultative decisions. This is critical for medical diagnosis and risk assessment [45,46,47,48,49,50,51], areas that underpin sustainable health systems [52,53,54,55], and disaster risk reduction strategies.
Despite the recognized potential of the Dempster–Shafer theory in managing uncertainty, a notable gap exists in its application to sustainability decision-making. Current methodologies inadequately capture the complexity of integrating linguistic preferences and evaluating trust among experts, which are crucial for resolving conflicts in expert opinions. This gap is particularly pronounced in sustainability contexts, where decisions often rely on the delicate balance of diverse expert insights. Our research addresses this gap by proposing a novel algorithm that systematically incorporates these elements into the Dempster–Shafer framework, thereby enhancing its utility in complex decision-making scenarios.
This study makes several key contributions to the field of sustainability decision-making. Firstly, it introduces an innovative algorithm for the integration of linguistic preferences and trust relationships into the Dempster–Shafer theory, offering a more sophisticated tool for conflict measurement and resolution. Secondly, it develops a comprehensive methodology for calculating inter-expert similarity and consensus thresholds, facilitating a deeper understanding and management of expert opinion diversity. Lastly, through the introduction of a dual-path adjustment mechanism, our approach allows for the dynamic reconciliation of conflicting expert opinions, ensuring decisions are both robust and reflective of expert consensus. These contributions represent a significant advancement in the application of the Dempster–Shafer theory to sustainability decision-making, providing a more effective framework for navigating the complexities of sustainability challenges.
The primary objective of this paper is to enhance the efficacy and applicability of the Dempster–Shafer theory in sustainability decision-making by incorporating linguistic preferences and trust relationships. This aims to address the identified gaps in current methodologies, particularly in conflict measurement and resolution among expert opinions. To achieve this, the paper is structured as follows: Section 2 delves into the theoretical background, emphasizing the relevance of the Dempster–Shafer theory in handling uncertainty. Section 3 introduces our novel algorithm, detailing its development and the integration of linguistic preferences and trust relationships. Section 4 embarks on a thorough numerical simulation to contrast the proposed algorithm with existing methodologies, highlighting the improvements and verifying its robustness and stability by introducing noise elements. Section 5 transitions our theoretical advancements into practical application through a case study on village planning, demonstrating the tangible benefits and applicability of our approach in real-world sustainability challenges. Finally, Section 6 concludes with a discussion of the findings, limitations, and potential avenues for future research. This structured approach ensures a coherent flow of information, facilitating a deeper understanding of our contributions to the field of sustainability decision-making.

2. Preliminaries

2.1. Recognition Framework

In the context of the Dempster–Shafer evidence theory, a non-empty set encompassing all possible outcomes of the identifiable objectives within the existing cognition, whose elements are pairwise mutually exclusive and collectively exhaustive, is called a recognition framework and is denoted by the complete set ⨀:
⨀ = \{ϱ_1, ϱ_2, ϱ_3, \ldots, ϱ_λ\}
where ϱ_i, i = 1, 2, 3, …, λ, represents any individual subset proposition within the recognition framework ⨀, and λ denotes the number of such subset propositions.
The collection of all subsets within the recognition framework constitutes the power set of ⨀, denoted as 2^⨀, and may be represented as:
2^⨀ = \{∅, \{ϱ_1\}, \ldots, \{ϱ_λ\}, \{ϱ_1 ∪ ϱ_2\}, \{ϱ_1 ∪ ϱ_3\}, \ldots\}
where ∅ represents the empty set, and ϱ_j ∪ ϱ_k represents any compound subset proposition within the recognition framework ⨀, which may also be denoted as \{ϱ_j, ϱ_k\}. When the set ⨀ contains λ individual subset propositions, the corresponding power set 2^⨀ comprises 2^λ elements.
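For illustration, the following minimal Python sketch enumerates the power set of a three-element recognition framework; the element names and the function are illustrative only and not part of the formal development.

```python
from itertools import combinations

# Illustrative recognition framework with three singleton propositions.
frame = ["rho1", "rho2", "rho3"]

def power_set(elements):
    """Return all 2**len(elements) subsets of the frame as frozensets."""
    return [frozenset(c)
            for r in range(len(elements) + 1)
            for c in combinations(elements, r)]

subsets = power_set(frame)
print(len(subsets))  # 8 = 2**3, i.e., 2**lambda for lambda = 3
```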

2.2. Basic Probability Assignment Function

The BPA is utilized to describe the uncertainty regarding a set of propositions and their relationship with the evidence. The BPA function is a mapping from a set of possible propositions to the [0,1] interval, assigning a probability value to each proposition, denoting the degree of support for that proposition given the available evidence. The definition of BPA is as follows:
Suppose there is a collection of propositions, typically represented as H = \{H_1, H_2, H_3, \ldots, H_n\}, where each H_i represents a distinct proposition. The BPA function is a mapping m from the proposition set to the probability interval [0,1], BPA: H → [0,1], which adheres to the following conditions:
0 \le m(H) \le 1, \qquad \sum_{H \subseteq ⨀} m(H) = 1, \qquad m(∅) = 0
In this context, m is referred to as the BPA function or mass function on the recognition framework ⨀, and m(H) represents the BPA or mass for proposition H, signifying the basic belief in proposition H. When m(H) > 0, the subset H is termed a focal element, and the union of all focal elements is designated as the core, denoted as k(m).
The primary purpose of the BPA function is to depict the level of uncertainty regarding each proposition and allocate uncertainty to different propositions. These uncertainty values can be used to calculate the degree of belief concerning the entire proposition space.
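As a concrete illustration of the definition above, the sketch below stores a mass function as a dictionary keyed by frozensets and checks the three BPA conditions; the numerical masses are hypothetical.

```python
# Mass function over the frame {rho1, rho2, rho3}; the values are hypothetical.
m = {
    frozenset({"rho1"}): 0.5,
    frozenset({"rho2"}): 0.3,
    frozenset({"rho1", "rho3"}): 0.2,   # a compound focal element
}

def is_valid_bpa(m, tol=1e-9):
    """Check m(empty set) = 0, 0 <= m(H) <= 1, and that all masses sum to 1."""
    if m.get(frozenset(), 0.0) != 0.0:
        return False
    if any(v < 0.0 or v > 1.0 for v in m.values()):
        return False
    return abs(sum(m.values()) - 1.0) < tol

print(is_valid_bpa(m))                                # True
focal_elements = [h for h, v in m.items() if v > 0]   # the core k(m) is their union
```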

2.3. Dempster’s Evidence Combination Rule

The BPA functions can be employed to combine the BPA from different pieces of evidence to compute the total belief in a particular subset. Let us assume that, within the recognition framework, there are two BPA functions, denoted as m_1 and m_2, representing distinct sources of evidence. These BPA functions have focal elements H_μ and H_ν, respectively, and the combined BPA is calculated through the evidence combination rule as follows:
m(H) = (m_1 \oplus m_2)(H) =
\begin{cases}
0, & H = ∅ \\
\frac{1}{k} \sum_{H_μ \cap H_ν = H} m_1(H_μ)\, m_2(H_ν), & H \neq ∅
\end{cases}
where k = \sum_{H_μ \cap H_ν \neq ∅} m_1(H_μ)\, m_2(H_ν) serves as a normalization factor, ensuring that the total belief after combination falls within the [0,1] interval. If the value of k approaches 1, it indicates minimal conflict between the two pieces of evidence, and the normalized beliefs remain close to the original belief values. However, if the value of k approaches 0, it signifies significant conflict between the pieces of evidence, and the normalization becomes more pronounced, resulting in a flatter belief distribution.
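The sketch below implements the combination rule as written above: masses of evidence pairs whose focal elements intersect are accumulated on the intersection and renormalized by k, the total non-conflicting mass. Function names, variable names, and the example masses are illustrative.

```python
from collections import defaultdict

def combine(m1, m2):
    """Dempster's rule for two mass functions given as {frozenset: mass} dictionaries."""
    combined = defaultdict(float)
    k = 0.0  # total mass assigned to pairs of focal elements that intersect
    for h_mu, v1 in m1.items():
        for h_nu, v2 in m2.items():
            inter = h_mu & h_nu
            if inter:
                combined[inter] += v1 * v2
                k += v1 * v2
    if k == 0.0:
        raise ValueError("Total conflict: the evidence cannot be combined.")
    return {h: v / k for h, v in combined.items()}

m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.5}
print(combine(m1, m2))   # {'a'}: ~0.714, {'b'}: ~0.286 after renormalization by k = 0.7
```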

2.4. Probability Linguistic Term Set

In certain practical decision evaluations, achieving precise quantitative assessments can often be challenging. To facilitate the description of qualitative judgments from various experts, Professor Zadeh [23] introduced the concept of linguistic variables, which feature fuzzy and imprecise language descriptions.
Suppose S = \{S_η \mid η = 0, 1, \ldots, 2ξ\} is a finite, totally ordered, and discrete set of linguistic terms, where ξ represents a positive integer and S_η represents the possible values of linguistic variables. This set can be referred to as a linguistic term set if it satisfies the following conditions:
  • The set is ordered: S_δ > S_γ if and only if δ > γ.
  • A negation operator exists: Neg(S_η) = S_{2ξ−η}.
For example, when ξ = 2, a five-level linguistic term set S can be defined as follows:
S = { S 0 = Poor, S 1 = Slightly Poor, S 2 = Fair, S 3 = Slightly Good, S 4 = Good}
Different linguistic terms within a linguistic term set often carry varying degrees of importance. Therefore, the evaluation information includes not only multiple linguistic terms but also their corresponding occurrence probabilities (confidence or support). Hence, Pang [28] and others have introduced the concept of a probability linguistic term set based on existing relevant models.
Assuming that S = \{S_η \mid η = 0, 1, \ldots, 2ξ\} is a known linguistic term set, a probability linguistic term set can be defined as follows:
L(p) = \left\{ L^{(ψ)}(p^{(ψ)}) \,\middle|\, L^{(ψ)} \in S,\ p^{(ψ)} \ge 0,\ ψ = 1, 2, \ldots, \#L(p),\ \sum_{ψ=1}^{\#L(p)} p^{(ψ)} \le 1 \right\}
In this definition, L^{(ψ)}(p^{(ψ)}) represents the ψ-th linguistic term L^{(ψ)} along with its occurrence probability p^{(ψ)}, collectively referred to as a probability linguistic element. \#L(p) denotes the number of linguistic terms in L(p), with the linguistic terms L^{(ψ)} (ψ = 1, 2, ⋯, \#L(p)) typically arranged in ascending order.
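A probability linguistic element can be stored as a (term index, probability) pair; the short sketch below checks the conditions of this definition for the five-level set of the example above (ξ = 2). All names and values are illustrative.

```python
# Probabilistic linguistic term set over S = {S0, ..., S4} (xi = 2).
# Each entry is (term index eta, probability p); the probabilities may sum to <= 1.
L_p = [(3, 0.6), (4, 0.4)]   # {S3(0.6), S4(0.4)} -- a hypothetical evaluation

def is_valid_plts(plts, xi=2, tol=1e-9):
    """Check that term indices lie in [0, 2*xi], probabilities are non-negative,
    and the probabilities sum to at most 1."""
    terms_ok = all(0 <= eta <= 2 * xi and p >= 0.0 for eta, p in plts)
    return terms_ok and sum(p for _, p in plts) <= 1.0 + tol

print(is_valid_plts(L_p))  # True
```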

2.5. Generalized Similarity

Assuming L 1 ( p ) and L 2 ( p ) are two probability linguistic term sets under the same linguistic term standard, the distance between L 1 ( p ) and L 2 ( p ) is defined as:
D_ω(L_1(p), L_2(p)) = \left( \sum_{l=1}^{\#L(p)} T \right)^{1/α}, \qquad T = \left| \frac{β_1 ε_l^1 − β_2 ε_l^2}{Λ} \right|^{α}
where Λ represents the number of terms in the probability linguistic term set. The corresponding generalized similarity is defined as:
C_ω(L_1(p), L_2(p)) = 1 − D_ω(L_1(p), L_2(p))
The similarity exhibits the following properties:
0 \le C_ω(L_1(p), L_2(p)) \le 1; \quad C_ω(L_1(p), L_2(p)) = C_ω(L_2(p), L_1(p)); \quad C_ω(L_1(p), L_2(p)) = 1 \text{ if and only if } L_1(p) = L_2(p).
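A minimal sketch of the distance and similarity above is given below, under the assumption that each pair of linguistic terms is compared through its probability-weighted subscript and normalized by Λ; the exact roles of β and ε are interpreted here, so the weighting inside the loop is an assumption rather than a verbatim implementation.

```python
def plts_distance(L1, L2, alpha=2, Lambda=5):
    """Generalized distance between two equally long PLTSs, each a list of
    (term index, probability) pairs; Lambda is the size of the linguistic term set."""
    assert len(L1) == len(L2), "compare PLTSs with the same number of elements"
    total = 0.0
    for (e1, p1), (e2, p2) in zip(L1, L2):
        total += abs((p1 * e1 - p2 * e2) / Lambda) ** alpha
    return total ** (1.0 / alpha)

def plts_similarity(L1, L2, alpha=2, Lambda=5):
    """Generalized similarity C = 1 - D."""
    return 1.0 - plts_distance(L1, L2, alpha, Lambda)

print(plts_similarity([(3, 0.6), (4, 0.4)], [(3, 0.4), (4, 0.6)]))  # approximately 0.8
```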

2.6. Linguistic Preferences

Linguistic preferences represent a sophisticated mechanism for articulating subjective evaluations, judgments, or perceptions using a predefined spectrum of qualitative descriptors. This method is instrumental in decision-making contexts where quantifiable data are elusive or insufficient to capture the intricacies of human cognition and preference. In the realm of fuzzy set theory, linguistic preferences are operationalized through linguistic variables, each associated with a set of linguistic terms that are qualitatively ordered and potentially quantified through fuzzy numbers.
Mathematical Expression: A linguistic preference can be mathematically depicted as a tuple L = (l_1, l_2, \ldots, l_n), where each l_i is a linguistic term that can be further quantified into fuzzy numbers using membership functions μ_{l_i}(x), which map each term to a continuum of truth values within the interval [0,1].

2.7. Trust Relationships

Trust relationships encapsulate the qualitative dimension of reliance and confidence among entities (e.g., experts, decision-makers) within a collaborative decision-making ecosystem. These relationships are predicated on historical interactions, proven expertise, and the perceived relevance and reliability of contributions, forming a complex network of subjective interdependencies. Trust relationships are quantified through trust matrices or networks, facilitating the weighted aggregation of expert opinions based on the trustworthiness of sources.
Mathematical Expression: Formally, a trust relationship matrix T can be represented where t_{ij} denotes the trust score from expert i to expert j, with t_{ij} ∈ [0,1] and \sum_{j=1}^{n} t_{ij} = 1 for all i, ensuring normalization of trust scores across the network.
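The row-wise normalization implied by the constraint \sum_{j} t_{ij} = 1 can be sketched as follows; the raw trust values are hypothetical.

```python
import numpy as np

# Raw pairwise trust assessments; row i holds the trust expert i places in the others.
raw_trust = np.array([
    [0.0, 0.8, 0.6],
    [0.7, 0.0, 0.9],
    [0.5, 0.4, 0.0],
])

# Normalize each row so that every expert's trust scores sum to 1 across the network.
T = raw_trust / raw_trust.sum(axis=1, keepdims=True)
print(T.sum(axis=1))  # [1. 1. 1.]
```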

2.8. Trust Scores

Trust scores quantitatively assess the credibility or reliability of each participant’s input within a decision-making framework, serving as a critical metric for evaluating the influence of individual contributions on collective outcomes. Derived from an amalgamation of trust relationships, historical accuracy, and contextual relevance, trust scores dynamically modulate the weighting of expert opinions, thereby optimizing the decision-making process for enhanced objectivity and accuracy.
Mathematical Expression: Trust scores can be formally expressed as a vector S = [S_1, S_2, \ldots, S_n], where S_i signifies the computed trust score for the i-th expert. The computation of S_i integrates various dimensions of trustworthiness, potentially including direct trust, reputation-based trust, and context-specific trust factors.

2.9. Consensus Threshold

The consensus threshold establishes a quantifiable benchmark for achieving agreement among participants in a decision-making process, delineating the minimum level of concordance required for a decision to be accepted as reflecting a collective stance. This threshold is pivotal in mitigating the ambiguity inherent in group decisions, ensuring that outcomes are representative of a genuine consensus rather than a plurality or simple majority, thus enhancing the legitimacy and acceptance of decisions.
Mathematical Expression: The consensus threshold, denoted as C t h , is typically defined as a proportion or percentage, C t h ∈[0,1], indicating the requisite level of agreement among participants. The specific value of C t h is determined based on the context and desired stringency of consensus, with higher values indicating a stricter requirement for alignment among expert opinions.

3. The New PLDS Method for Conflict Measurement

The method proceeds through the following steps:
(1) Calculate the pairwise similarity between experts using cosine similarity and construct an expert similarity matrix.
(2) Assess internal consistency within the group of experts.
(3) Set a consensus threshold (average of the similarity matrix plus its standard deviation).
(4) Identify conflicting experts by comparing their similarity scores to the threshold (outputting the pairs of conflicting experts).
(5) An improved fusion method is proposed, consisting of the following steps:
  • Adjust the opinion weights of conflicting experts.
  • Modify the opinions of conflicting experts until conflicts no longer occur:
    • For each pair of conflicting experts, calculate their trust scores toward the other experts.
    • Each conflicting expert adjusts their opinions based on the trust scores toward the other experts.
    • Check for any remaining conflicts; if conflicts persist, return to step one, otherwise end the iteration.
For a set of ξ decision alternatives, denoted as m = \{m_1, m_2, \ldots, m_ξ\}, and n experts using the linguistic term set L = \{L_α \mid α = −ζ, \ldots, −1, 0, 1, \ldots, ζ\} to perform pairwise comparisons of these alternatives, we can obtain n normalized probability linguistic preference matrices T_φ. These matrices are constructed based on the expert comparisons. Each entry in the matrix represents the preference of an expert for one alternative over another, expressed in terms of probabilities assigned to linguistic terms. The exact construction of T_φ depends on the details of the expert comparisons and the specific method used to derive these preferences:
T_φ = \begin{pmatrix} S_0 & L_{12}(p) & \cdots & L_{1ξ}(p) \\ & S_0 & \cdots & L_{2ξ}(p) \\ & & \ddots & \vdots \\ & & & S_0 \end{pmatrix}
Assume two normalized probability linguistic preference matrices T_1 = (L_{ij}(p)^1)_{ξ×ξ} and T_2 = (L_{ij}(p)^2)_{ξ×ξ}, and calculate their similarity using the given similarity calculation formula:
Si(T_α, T_β) = 1 − \frac{\sum_{i=1}^{ξ} \sum_{j=1}^{ξ} d(L_{ij}(p)^α, L_{ij}(p)^β)^2}{ξ(ξ−1)}
The similarity between the two matrices T 1 and T 2 can be expressed as follows:
Si(T_1, T_2) = 1 − \frac{\sum_{i=1}^{ξ} \sum_{j=1}^{ξ} d(L_{ij}(p)^1, L_{ij}(p)^2)^2}{ξ(ξ−1)}
where S i ( T 1 ,     T 2 ) closer to 1 indicates a higher degree of similarity between T 1 and T 2 . To obtain the similarity matrix for all experts, just follow these steps:
  • Calculate the similarity between each pair of experts using the given similarity formula S i ( T 1 ,     T 2 ) for their respective preference matrices.
  • Store the calculated similarity values in a matrix where the rows and columns correspond to the experts being compared.
  • Repeat this process for all possible pairs of experts, resulting in a complete similarity matrix that captures the similarities between all experts.
The final matrix will provide a comprehensive view of the similarity relationships among all the experts based on their respective preference matrices.
SM = \begin{pmatrix} 1 & Si_{12} & \cdots & Si_{1ξ} \\ & 1 & \cdots & Si_{2ξ} \\ & & \ddots & \vdots \\ & & & 1 \end{pmatrix}
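Given any pairwise similarity function of the kind defined above, the complete expert similarity matrix can be assembled as in the following sketch; the function name sim and its calling convention are assumptions.

```python
import numpy as np

def similarity_matrix(preference_matrices, sim):
    """Build the symmetric expert similarity matrix SM.
    preference_matrices: list of the experts' preference matrices T_phi.
    sim: a pairwise similarity function such as Si(T_1, T_2) above."""
    n = len(preference_matrices)
    SM = np.ones((n, n))                      # an expert is fully similar to itself
    for a in range(n):
        for b in range(a + 1, n):
            SM[a, b] = SM[b, a] = sim(preference_matrices[a], preference_matrices[b])
    return SM
```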
The Intra-class Correlation Coefficient (ICC) is a generalized correlation coefficient used to measure the consistency among multiple raters or observers. The ICC method can be applied to various types of data, including continuous data, binary data, and categorical data. For a set of data, assume there are n raters or observers and k items.
The calculation formula of ICC is based on the Analysis of Variance (ANOVA) model and takes into account different components of the data. The Average Measurement ICC is used to assess the consistency of raters or observers and considers differences in the average measurement results of different raters for the same item. It provides a quantitative measure of the agreement among multiple observers or raters for assessing the same set of items.
The Average Measurement ICC:
ICC = \frac{MS_β − MS_ω}{MS_β + (k − 1)\,MS_ω + \frac{MS_ε}{n}}
Indeed, in the context of the ICC, MS_β represents the variance between items (projects or entities), MS_ω is the variance within raters (reviewers or observers), and MS_ε is the error variance. The ICC values can help assess the consistency among raters or observers by considering the variances between items, within raters, and the error variance. A higher ICC value, close to 1, indicates high consistency among raters, whereas a lower ICC value, closer to 0, suggests lower consistency, which can occur when both the within-rater and error variances are high.
Regarding the calculation of a consensus threshold, it is essential for the process of conflict identification based on the similarity matrix. By comparing the similarity between pairs of experts with the consensus threshold, you can determine if conflicts exist. When the similarity falls below the consensus threshold, it signifies the presence of conflicts. The consensus threshold serves as a critical criterion for identifying disagreements or conflicts among experts and is a key component in the process of conflict resolution or decision-making.
The formula for calculating the consensus threshold Ct is as follows:
Ct = M + Sd
Here, M represents the mean (average) of the normalized similarity matrix, and Sd represents its standard deviation.
For the similarity matrix S_{ij}, the normalized similarity matrix N_{ij} is calculated as follows:
N_{ij} = \frac{S_{ij}}{\max(S)}
where \max(S) represents the maximum value in the similarity matrix S_{ij}.
The average M of the normalized similarity matrix N_{ij} is calculated as follows:
M = \frac{\sum_{i=1}^{ξ} \sum_{j=1}^{ξ} N_{ij}}{ξ^2}
where the numerator is the sum of all elements in the normalized similarity matrix N_{ij}, and ξ represents the dimension of the similarity matrix.
The standard deviation Sd of the normalized similarity matrix N_{ij} is calculated as follows:
Sd = \sqrt{\frac{\sum_{i=1}^{ξ} \sum_{j=1}^{ξ} (N_{ij} − M)^2}{ξ^2}}
These calculations help determine the consensus threshold based on the normalized similarity matrix, its mean, and its standard deviation. The consensus threshold is a crucial parameter for identifying conflicts among experts.
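The threshold computation and the conflict screening described above can be sketched as follows; whether the raw or the normalized similarities are compared against the threshold is an implementation choice made in this sketch, not prescribed by the text.

```python
import numpy as np

def consensus_threshold(SM):
    """Normalize the similarity matrix by its maximum and return (N, Ct = M + Sd)."""
    N = SM / SM.max()
    return N, N.mean() + N.std()              # mean and std over all xi**2 entries

def conflicting_pairs(N, ct):
    """Expert pairs whose normalized similarity falls below the consensus threshold."""
    n = N.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n) if N[i, j] < ct]
```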
To adjust the opinion weights of conflicting experts, use the steps as follows:
  • Calculate the average similarity for each expert:
    S_i^A = \frac{\sum_{j \ne i} SM_{ij}}{n − 1}
    where SM_{ij} represents the similarity between expert i and expert j, and n is the total number of experts.
  • Calculate the mean of the average similarities:
    S^M = \frac{\sum_{i=1}^{n} S_i^A}{n}
    where S_i^A is the average similarity value for the i-th expert, and n is the total number of experts.
  • Calculate the standard deviation of the average similarities:
    S^{Std} = \sqrt{\frac{\sum_{i=1}^{n} (S_i^A − S^M)^2}{n}}
  • Calculate the new weight coefficients. For expert i:
W_i = \frac{S_i^M}{S_i^M + S_i^{Std}}
and for expert j:
W_j = \frac{S_j^M}{S_j^M + S_j^{Std}}
where S_i^M and S_j^M are the means of the average similarity values of the matrices of experts i and j, respectively, and S_i^{Std} and S_j^{Std} are the corresponding standard deviations.
These weight coefficients are calculated based on the average similarity and standard deviation of the average similarity for each expert’s matrix. These coefficients are used to adjust the weights assigned to experts i and j in the modified fusion process, taking into account their respective performance in terms of similarity and consistency.
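A compact sketch of one plausible reading of the weight formulas above is given below: each conflicting expert's weight is computed from the mean and standard deviation of that expert's similarities to the remaining experts. This reading, and all names, are assumptions rather than the paper's exact procedure.

```python
import numpy as np

def adjusted_weight(SM, i):
    """Weight W = S_M / (S_M + S_Std) for conflicting expert i, computed from the
    mean and standard deviation of expert i's similarities to all other experts."""
    others = np.delete(SM[i], i)              # drop the diagonal self-similarity
    s_mean, s_std = others.mean(), others.std()
    return s_mean / (s_mean + s_std)
```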
In this study, “trust” is conceptualized as a multidimensional construct that encompasses both an expert's “competence” and “intention”. Competence refers to the expert's domain-specific knowledge, skills, and experience that contribute to their ability to provide insightful and accurate assessments within the context of our research. This includes their professional qualifications, track record of contributions to the field, and the relevance of their expertise to the decision-making process at hand.
Intention, on the other hand, reflects the expert’s commitment to providing honest, unbiased, and objective input. It assesses the sincerity of the expert’s participation in the decision-making process, aiming to gauge their willingness to share knowledge without personal bias or ulterior motives. Intention is critical for ensuring that the expert contributions are guided by a genuine desire to aid the decision-making process, rather than by personal gain or external influences.
Together, these dimensions form the foundation of trust in our methodology, providing a comprehensive measure of an expert’s reliability. By evaluating both competence and intention, we aim to construct a nuanced trust score that accurately reflects the quality and dependability of each expert’s input, thereby enhancing the integrity and effectiveness of the decision-making process.
The process of modifying the opinions of conflicting experts based on trust relationships involves the calculation of the trust relationship matrix among experts. This is performed by leveraging the similarity matrix among experts to measure their similarity or consistency. The steps to implement this process are as follows:
  • Calculate the trust relationship matrix among experts based on the given similarity matrix. For each expert i, normalize their similarity vector to ensure it falls within the range of [0,1].
  • Using the normalized similarity vectors, construct the trust relationship matrix T M . Each element T M i j represents the degree of trust expert i has in expert j. The construction involves using the normalized similarity vectors as rows or columns of the trust relationship matrix.
  • With the trust relationship matrix in place, you can adjust conflicting opinions based on the trust scores assigned by each expert to other experts. This adjustment considers the level of trust one expert has in another, which can help in reconciling conflicting opinions and achieving consensus.
The trust relationship matrix helps in capturing the trustworthiness of each expert’s opinion regarding the others, and this trust-based approach can be used to facilitate the resolution of conflicts among experts.
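The construction of the trust relationship matrix and the trust-driven opinion adjustment can be sketched as follows. The linear update rule and the step size are illustrative assumptions; the method only requires that conflicting experts move toward the opinions of the experts they trust, iterating until no conflicts remain.

```python
import numpy as np

def trust_matrix(SM):
    """Row-normalize the expert similarity matrix into a trust matrix TM,
    where TM[i, j] is the degree of trust expert i places in expert j."""
    TM = SM.astype(float).copy()
    np.fill_diagonal(TM, 0.0)                 # self-trust is excluded in this sketch
    return TM / TM.sum(axis=1, keepdims=True)

def adjust_opinion(preferences, i, TM, step=0.5):
    """Move conflicting expert i's numeric preference vector toward the
    trust-weighted average of the other experts' preferences."""
    peer_view = TM[i] @ preferences           # trust-weighted aggregate of the peers
    return (1.0 - step) * preferences[i] + step * peer_view
```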

4. Comparative Analysis between PLDS Algorithm and Existing Models

In response to the evolving challenges within the domain of sustainability decision-making, this study introduces the Probabilistic Linguistic Dempster–Shafer (PLDS) algorithm, a novel advancement aimed at enriching the Dempster–Shafer theory’s application by integrating linguistic preferences and trust relationships. A critical facet of our research lies in the meticulous comparative analysis conducted between the PLDS algorithm and the traditional Dempster–Shafer (D-S) algorithm. Our analytical approach employs a Monte Carlo method for simulating reasoning, generating large, normalized expert preference evaluation matrices. Through this method, we analyze the effectiveness and stability of the PLDS algorithm in handling conflicting evidence against the backdrop of the traditional D-S algorithm.
In the numerical analysis of the algorithm, the goal is to validate the relative effectiveness and stability of the algorithm proposed in this paper for handling conflicting evidence. The decision problem's identification framework is represented as ξ = \{ξ_ϕ \mid ϕ = 1, 2, 3\}. To ensure the effectiveness of the algorithm's simulation reasoning, a Monte Carlo method is used to randomly generate 500 normalized expert preference evaluation matrices within the range [0,1].
Once these evaluation matrices are generated, you can calculate the consensus threshold to identify conflicting experts. Then, you can analyze and compare the changes in two types of algorithms, one before improvement and one after improvement, concerning the same evaluation criteria, in response to different sets of conflicting experts. This analysis will provide insights into the algorithm’s performance in handling conflicting evidence under various scenarios and help evaluate its effectiveness and stability.
Figure 1, Figure 2, Figure 3 and Figure 4 represent the performance of the attribute preference algorithm for selected conflicting expert pairs after the consensus threshold has been determined:
  • The original D-S algorithm, represented by the dashed line, exhibits significant fluctuations, while the improved PLDS algorithm, represented by the solid line, shows a smoother trend. This indicates that the improved PLDS algorithm produces more stable results.
  • The original D-S algorithm generates noticeable peaks at certain time points, whereas the improved PLDS algorithm has fewer such peaks. This suggests that the improved PLDS algorithm is more reliable and less prone to producing misleading peaks.
  • The improved PLDS algorithm, represented by the solid line, is closer to a flat, horizontal line. This indicates that the improved PLDS algorithm aligns more with real-world situations where experts cannot provide definitive evaluations for certain attributes, leading to more logically consistent results that match real-world expectations.
  • The improved PLDS algorithm demonstrates greater stability in the results towards the end of the time series. This shows that the improved PLDS algorithm performs better when dealing with boundary cases.
In traditional Dempster’s rule of combination, when evidence within a recognition framework completely negates a proposition, the proposition is entirely negated in the final fusion result, regardless of the support from other pieces of evidence. The reason for synthesis conflicts lies in the extremeness and contradictions within the pieces of evidence themselves. Incorrect fusion results are due to issues in the input evidence. Therefore, pre-processing of evidence is required before evidence combination.
To address the aforementioned issue, this paper takes an approach from the perspective of inter-expert communication and uses linguistic preferences to determine each expert’s preference description for the options. It identifies conflicting experts using consensus thresholds among the expert group. Through mutual learning and communication among the experts, it establishes recognition and trust in each other’s expertise, deepening their understanding of the alternative options. It also addresses conflicting evidence by modifying it to obtain an acceptable unified evaluation matrix. The paper then applies the proposed method to reassign BPAs for conflicting experts and proceeds with the fusion.
To validate the robustness of the algorithms proposed for handling conflicting evidence, let us assume the recognition framework for the decision problem is η = \{η_τ \mid τ = 1, 2, 3\}. To ensure the effectiveness of simulating reasoning, we randomly generate 500 normalized expert preference evaluation matrices within the range of [0,1] using the Monte Carlo method. We then modify the conflicting expert pairs based on the trust relationships formed through interaction and analyze the algorithm's performance under different parameter variations before and after improvement, including the addition of Gaussian noise.
Figure 5, Figure 6, Figure 7 and Figure 8 show that, under different values of the consensus threshold (CT) and with the addition of noise, the improved algorithm shows significant differences in performance compared to the original algorithm (the blue curves represent the D-S algorithm and the red curves the PLDS algorithm):
  • As the consensus threshold (CT) increases, the number of conflicting expert pairs increases. This suggests that a higher CT results in a larger number of expert pairs being identified as conflicting.
  • With an increase in the magnitude of noise, the extremely low mass values for conflicting expert pairs gradually decrease. This indicates that, as external noise becomes more prominent, expert preference evaluations tend to become more conservative.
  • The improved algorithm, compared to the original algorithm, exhibits smoother behavior in dealing with oscillations in the results. This implies that the improved algorithm is less affected by noise, leading to more stable results.
These observations suggest that the improved algorithm is more robust and resilient to variations in consensus threshold and noise, making it a more reliable approach for handling conflicting expert evidence.
Enhancements Offered by PLDS Algorithm:
Improved Handling of Conflicting Evidence: The PLDS algorithm introduces an innovative mechanism for dealing with conflicting evidence, a notable limitation in the traditional D-S framework. Our numerical analysis illustrates that the PLDS algorithm significantly reduces the impact of conflict between different sources of evidence, thereby increasing the reliability of the decision-making process.
Enhanced Stability and Accuracy: The algorithmic numerical analysis highlights the superior stability and accuracy of the PLDS algorithm over the D-S algorithm. Through a series of simulations, we demonstrate how the PLDS algorithm maintains consistent performance even under conditions of high uncertainty and varied evidence quality.
Advanced Evidence Integration: Unlike the conventional approach, which struggles with integrating disparate pieces of evidence, the PLDS algorithm employs a probabilistic linguistic approach. This allows for a more nuanced and flexible integration of evidence, which is particularly beneficial when dealing with complex and heterogeneous data.

5. Case Application

To further substantiate the efficacy and applicability of the method proposed in this article, the introduction of practical cases for relevant analysis is imperative. Rural revitalization has persistently stood as a central concern of the nation. Commencing in the year 2023, Province A initiated a profound progression in the realm of rural revitalization. Within this overarching endeavor, village planning emerges as a pivotal undertaking. The alternative construction proposals, as stipulated below, must take into account the distinctive characteristics of local rural revitalization phases. It is paramount to gain profound insight into the evolutionary patterns of urban–rural relations and developmental trends while optimizing the spatial configuration of urban–rural development. This approach, in turn, will expedite the fusion of urban and rural development.
In accordance with the aforementioned stipulations, the five alternative proposals are delineated as follows:
  • Implementation of upper-level planning control lines, permanent basic farmland, ecological protection redlines, and the retained quantity of arable and woodland resources. The formulation of a heritage protection catalog is necessary, facilitating land layout that avoids unfavorable geological, flood-prone, and seismic areas.
  • Anchoring the spatial pattern of villages, characterized by mountains, waters, fields, lakes, forests, and grasslands. Advocating for region-specific rural residential architecture and meticulously adhering to the constraints concerning the scale and layout of homesteads. Respecting the natural geographic features of rural areas and elucidating the scope, scale, and chronology of tasks related to ecological restoration and comprehensive land remediation.
  • Defining the boundaries of construction land for villages, specifying the spatial distribution of water resources, mineral resources, waterbody shorelines, urban corridors, traffic corridors, and other critical control boundaries. Formulating a comprehensive master plan and related initiatives for environmental remediation in areas of paramount importance, such as areas of concentrated rural settlement.
  • Orchestrating village development in harmony with resource endowment, outlining the overall layout for village industry development, and establishing a seamless connection between village infrastructure and public service facilities, as well as transportation and urban infrastructure. Guiding and controlling the allocation of land for emerging rural formats, agricultural facilities, and collectively operated construction land.
  • Coordinating the development of linear projects and special endeavors in transportation, public facilities, military, strategic reserves, disaster prevention, and disaster reduction. Ensuring the provision of complementary service facilities for designated suburban parks and scenic tourism facilities.
To leverage the comprehensive algorithm presented in this article to simulate the decision-making optimization process, and contingent on actual circumstances, we invited twenty experts from diverse fields such as agriculture, rural affairs, natural resources, urban planning, and tourism. Their corresponding linguistic lexicon encompasses the following: S = \{S_{−4} = Extremely poor, S_{−3} = Considerably poor, S_{−2} = Inferior, S_{−1} = Slightly subpar, S_0 = Average, S_1 = Somewhat good, S_2 = Superior, S_3 = Considerably excellent, S_4 = Exceptionally outstanding\}. These twenty experts, each drawing upon their domain-specific knowledge, have expressed their evaluation preferences for the aforementioned five proposals as shown in Table A1 in Appendix A.
Utilizing the probabilistic linguistic set distance formulas (Equations (2) and (3)) and similarity measurement formulas (Equations (4) and (5)), we derive the similarity matrices I for experts, as shown in Table A2 in Appendix A.
Assessment of Expert Internal Consistency: the Intraclass Correlation Coefficient (ICC) is calculated at a significance level of α = 0.03 using Formula (6), yielding ICC(M_μ, M_ν) = 0.00047.
The internal consistency among experts, as reflected by the ICC, is notably less than 1, indicating significant discrepancies. Therefore, conflict resolution and the associated steps are imperative. The consensus threshold for the expert group, as determined by Formula (7), is CT(M_μ, M_ν) = 0.9327.
By traversing the similarity matrix, the conflicting experts and their respective weights are identified using Formulas (11)–(13); the initial weights of the conflicting experts are shown in Table A4 in Appendix A.
Subsequent to interactive discussions and deepening their comprehension of the issue, if experts remain unwilling to amend their opinions, it becomes necessary to adjust the weights assigned to the conflicting experts, as outlined by Formulas (14) and (15). The revised weights of the conflicting experts are shown in Table A5 in Appendix A.
After taking into account the new weighting factors, the conflicting experts' evaluations are processed using the Basic Probability Assignment (BPA) function determined by Formula (1), resulting in Table A6 (the BPAs of the conflicting experts for the proposals) in Appendix A.
Based on the Dempster–Shafer (D-S) evidence theory fusion algorithm, it is determined that the X 1 proposal is the optimal alternative for village planning.
Trust is delineated into two primary dimensions: competence and intention. This bifurcation allows for a more nuanced assessment of expert contributions, ensuring that our trust evaluations encapsulate both the competence and the intention of each expert’s input.
Competence refers to the technical expertise and knowledge an expert possesses in the relevant domain. It is quantified through an analysis of the expert’s historical contributions, qualifications, and recognition in their field, ensuring that the trust placed in their opinions is proportional to their proven expertise. Mathematically, we represent competence as C i j , where C i j measures the level of technical expertise of expert i as perceived by expert j. This measurement is derived from a combination of factors, including publication records, peer evaluations, and direct contributions to the field of study.
Intention reflects the perceived sincerity and objectivity of an expert’s contributions. It considers factors such as the consistency of the expert’s opinions over time and their alignment with established evidence, aiming to identify any potential biases. We quantify intention as I i j , where I i j assesses the reliability of expert i’s input from the perspective of expert j, incorporating historical data on the expert’s objectivity and consistency in judgment.
Combining these dimensions, the trust score T i j towards each expert is calculated as a weighted sum of competence and intention, reflecting both the expert’s knowledge and their reliability. Formally, we define the trust score as: T i j = α· C i j + β· I i j where α and β are weighting factors that balance the importance of competence versus intention in the decision-making context.
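As a worked illustration with hypothetical numbers: for α = 0.6 and β = 0.4, a competence score C_{ij} = 0.9 and an intention score I_{ij} = 0.7 yield a trust score T_{ij} = 0.6 × 0.9 + 0.4 × 0.7 = 0.82.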
Upon gaining a more profound understanding of the proposal, and with a willingness to adjust their assessment opinions based on trust in peer experts, as outlined by Formulas (8)–(10), a new expert similarity matrix II has been obtained as shown in Table A3 in Appendix A.
To facilitate a visual comparison of the differences in expert similarity matrices before and after modifications, we now introduce heat maps to observe the alterations in experts’ opinions based on trust relationships, which is aimed at resolving conflicts and ultimately achieving the selection of the optimal solution.
The following conclusions can be drawn from the analysis of the heatmap data and its related changes in Figure 9:
  • Specific Locations and Values of Significantly High Difference Points: The most notable difference point is located between experts numbered (1, 19), with a difference value of 0.8231. This indicates a level of conflict in project evaluation between expert 1 and expert 19 that is significantly higher than that observed among other pairs of experts. Other significant difference points include (8, 17) and (1, 18), with difference values of 0.8 and 0.7923, respectively.
  • Distribution of Significantly High Difference Points: In the x-axis (representing experts from the first matrix), expert 1 is associated with three instances of significantly high differences, suggesting substantial disagreements in scoring with several other experts. Similarly, on the y-axis (representing experts from the second matrix), expert 19 is involved in four instances of significantly high differences, indicating notable scoring disparities with several other experts.
  • Potential Conflict Patterns Among Expert Pairs: The distribution of significantly high difference points reveals that certain experts, particularly experts 1 and 19, have markedly different evaluation standards or preferences between the two data sets. Their significant differences in multiple comparisons may indicate fundamental disagreements in certain evaluation dimensions. These significant difference points may represent “conflict expert pairs,” warranting special attention in subsequent research and analysis.
Through the improved Dempster fusion algorithm proposed in this paper, it is evident that the X 1 proposal emerges as the optimal alternative for village planning, as adjusted based on trust relationships.
The case application of the Dempster–Shafer (D-S) evidence theory in sustainable village planning in Province A has demonstrated substantial promise. The methodology, which involved the assessment of five distinct proposals, highlighted the X1 proposal due to its strong alignment with sustainability goals. The critical findings and data from this case application include:
(1) Expert Consistency and Conflict Resolution: The initial Intraclass Correlation Coefficient (ICC) of 0.00047 pointed to significant discrepancies among expert opinions. This necessitated the implementation of conflict resolution measures and the adjustment of expert weights. The consensus threshold for the expert group was established at 0.9327, indicating the high level of agreement required for decision-making.
(2) Weight Adjustments for Conflict Experts: The initial and revised weights of conflict experts clearly illustrate this strategy. For instance, expert O1's weight for Proposal 1 was reduced from 0.29343539 to 0.17855816. This significant reduction in weight for experts who maintained their conflicting opinions ensures that the decision-making process minimizes the impact of intransigent views. The reduction in weights was consistent across all conflicting experts for various proposals, underscoring the algorithm's capability to adaptively manage expert influence.
(3) Basic Probability Assignment (BPA) Function Results: The BPA values for Proposal X1 ranged from 0.18 to 0.46 across different experts, highlighting the effectiveness of the D-S algorithm in achieving consensus among varying opinions. These values illustrate the algorithm's ability to integrate diverse expert judgments into a cohesive decision-making framework.
(4) Identification of Optimal Proposal: The X1 proposal, focused on upper-level planning control, ecological protection, and resource conservation, was determined as the most suitable for sustainable village development. This was evidenced by its consistently high BPA scores, which were reflective of its alignment with the sustainability goals of the project.
(5) Expert Similarity Matrix and Trust Dynamics: The study utilized a new expert similarity matrix, adjusted based on trust relationships, to further validate the selection of the X1 proposal. This matrix was crucial in understanding the degree of agreement among experts and adjusting their evaluations accordingly. It underscored the importance of trust dynamics in achieving a consensus in complex decision-making scenarios.

6. Conclusions

This research marks a pivotal advancement in conflict measurement within Dempster–Shafer theory, introducing a sophisticated algorithm that integrates linguistic preferences with trust relationships for enhanced decision-making in sustainability. Our approach distinguishes itself by calculating inter-expert similarity and consensus thresholds, employing a dual-path adjustment to effectively manage expert opinion conflicts. The methodology’s strength lies in its ability to articulate a more precise representation of expert judgments, crucial for navigating the complexities of sustainability-related decisions. Through comprehensive numerical simulations and a case study on sustainable village planning, we have validated our approach’s superiority in reconciling conflicting evidence, demonstrating its potential to significantly improve decision-making accuracy and reliability.
Despite its innovative contributions, this study faces limitations, particularly in the scalability of our algorithm for larger datasets and its application breadth across diverse sustainability scenarios. The complexity of integrating linguistic preferences and trust relationships, while beneficial for accuracy, introduces challenges in computational efficiency and user accessibility. Moreover, our case study, focused on sustainable village planning, may not fully capture the broader applicability of our methodology to other domains of sustainability, suggesting a need for further testing and adaptation to ensure wider relevance.
Looking ahead, several avenues for future research emerge from our study. First, enhancing the algorithm’s scalability and computational efficiency will be crucial. This involves developing more streamlined methods for calculating inter-expert similarity and consensus, potentially through the integration of machine learning techniques to automate and refine these processes. Additionally, exploring the application of our methodology across a broader spectrum of sustainability challenges, from urban planning to resource management, will help in assessing its generalizability and effectiveness in varied contexts.
Further, the integration of more objective data sources alongside expert judgments could offer a balanced approach to decision-making, reducing potential biases and increasing the robustness of outcomes. Investigating advanced analytical techniques, such as deep learning, could provide insights into complex patterns within expert opinions, enhancing the decision-making framework’s predictive capabilities. Another promising direction is the exploration of novel conflict resolution mechanisms that can accommodate divergent viewpoints more equitably, fostering transparency and inclusivity in decision-making processes. This could include the development of new linguistic scales and trust models that reflect the nuances of expert interactions more accurately.
Finally, longitudinal studies tracking the implementation of decisions derived from our methodology could yield valuable feedback on its long-term impact on sustainability outcomes. Such research would not only validate the practical utility of our approach but also highlight areas for continuous improvement, ensuring that the methodology evolves in response to changing sustainability needs and challenges. By addressing these limitations and exploring these future research directions, we aim to refine and expand the applicability of our methodology, contributing to more informed and sustainable decision-making practices globally.

Author Contributions

Conceptualization, Q.L. (Qingmiao Liu) and Q.L. (Qiang Liu); methodology, Q.L. (Qingmiao Liu); software, Q.L. (Qingmiao Liu); validation, Q.L. (Qingmiao Liu) and M.W.; formal analysis, Q.L. (Qingmiao Liu); investigation, Q.L. (Qingmiao Liu); resources, Q.L. (Qingmiao Liu); data curation, Q.L. (Qingmiao Liu); writing—original draft preparation, Q.L. (Qingmiao Liu); writing—review and editing, Q.L. (Qingmiao Liu), Q.L. (Qiang Liu) and M.W.; visualization, Q.L. (Qingmiao Liu); supervision, Q.L. (Qiang Liu); project administration, Q.L. (Qiang Liu); funding acquisition, Q.L. (Qiang Liu). All authors have read and agreed to the published version of the manuscript.

Funding

The Shandong Province Social Science Planning Research Project (Grant No. 23CXWJ04), the National Natural Science Foundation of China (Grant No. 61403230), and the Natural Science Foundation of Shandong Province (Grant No. ZR2020MG011).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Evaluation preferences of experts.
Table A1. Evaluation preferences of experts.
Each expert provides an upper-triangular matrix of probabilistic linguistic preference values, with S0 on the diagonal; for each expert, the rows of the upper triangle are listed in order and separated by vertical bars.
E2: S0, {S3(0.4), S4(0.6)}, {S1(0.9), S2(0.1)}, {S1(0.4), S2(0.6)}, {S3(0.6), S4(0.4)} | S0, {S4(1)}, {S3(0.2), S4(0.8)}, {S1(0.7), S0(0.3)} | S0, {S3(0.3), S2(0.7)}, {S2(0.9), S3(0.1)} | S0, {S4(0.5), S3(0.5)} | S0
E3: S0, {S3(0.2), S2(0.8)}, {S3(0.7), S4(0.3)}, {S0(0.8), S1(0.2)}, {S3(0.3), S4(0.7)} | S0, {S2(0.5), S3(0.5)}, {S0(0.5), S1(0.5)}, {S1(0.7), S2(0.3)} | S0, {S3(0.2), S4(0.8)}, {S0(0.2), S1(0.8)} | S0, {S3(0.9), S2(0.1)} | S0
E4: S0, {S3(0.8), S4(0.2)}, {S1(0.2), S2(0.8)}, {S3(0.6), S4(0.4)}, {S4(0.3), S3(0.7)} | S0, {S2(0.3), S3(0.7)}, {S1(0.8), S2(0.2)}, {S3(0.1), S4(0.9)} | S0, {S2(0.3), S1(0.7)}, {S3(0.4), S2(0.6)} | S0, {S1(0.3), S0(0.7)} | S0
E5: S0, {S1(0.1), S0(0.9)}, {S3(0.8), S2(0.2)}, {S1(0.6), S2(0.4)}, {S3(0.5), S4(0.5)} | S0, {S3(0.3), S4(0.7)}, {S4(0.6), S3(0.4)}, {S3(0.1), S4(0.9)} | S0, {S2(0.3), S1(0.7)}, {S3(0.4), S2(0.6)} | S0, {S1(0.4), S0(0.6)} | S0
E7: S0, {S4(0.6), S3(0.4)}, {S3(0.4), S4(0.6)}, {S1(0.2), S0(0.8)}, {S0(0.2), S1(0.8)} | S0, {S0(0.5), S1(0.5)}, {S4(0.6), S3(0.4)}, {S3(0.8), S2(0.2)} | S0, {S3(0.3), S4(0.7)}, {S3(0.2), S2(0.8)} | S0, {S2(0.4), S3(0.6)} | S0
E8: S0, {S0(0.1), S4(0.9)}, {S3(0.4), S4(0.6)}, {S2(0.7), S1(0.3)}, {S4(0.3), S3(0.7)} | S0, {S3(0.8), S2(0.2)}, {S3(0.4), S4(0.6)}, {S2(0.8), S8(0.2)} | S0, {S4(0.5), S3(0.5)}, {S1(0.7), S0(0.3)} | S0, {S1(0.8), S2(0.2)} | S0
E9: S0, {S2(0.8), S3(0.2)}, {S2(0.3), S1(0.7)}, {S3(0.4), S2(0.6)}, {S0(0.5), S1(0.5)} | S0, {S3(0.4), S4(0.6)}, {S3(0.2), S2(0.8)}, {S3(0.6), S4(0.4)} | S0, {S0(0.3), S1(0.7)}, {S0(0.8), S1(0.2)} | S0, {S1(0.9), S2(0.1)} | S0
E10: S0, {S3(0.4), S4(0.6)}, {S1(0.3), S0(0.7)}, {S3(0.6), S4(0.4)}, {S3(0.4), S4(0.6)} | S0, {S3(0.4), S4(0.6)}, {S3(0.6), S2(0.4)}, {S3(0.5), S4(0.5)} | S0, {S1(0.4), S2(0.6)}, {S3(0.2), S4(0.8)} | S0, {S1(0.7), S2(0.3)} | S0
E11: S0, {S3(0.7), S4(0.3)}, {S1(0.5), S0(0.5)}, {S1(0.2), S2(0.8)}, {S3(0.4), S4(0.6)} | S0, {S3(0.7), S2(0.3)}, {S1(0.9), S0(0.1)}, {S2(0.4), S3(0.6)} | S0, {S3(0.5), S2(0.5)}, {S2(0.1), S3(0.9)} | S0, {S2(0.8), S3(0.2)} | S0
E12: S0, {S3(0.6), S4(0.4)}, {S2(0.2), S1(0.8)}, {S1(0.3), S0(0.7)}, {S3(0.6), S4(0.4)} | S0, {S1(0.3), S0(0.7)}, {S3(0.9), S4(0.1)}, {S1(0.8), S2(0.2)} | S0, {S2(0.4), S1(0.6)}, {S3(0.5), S4(0.5)} | S0, {S4(0.7), S3(0.3)} | S0
E13: S0, {S2(0.3), S1(0.7)}, {S1(0.5), S0(0.5)}, {S2(0.8), S3(0.2)}, {S3(0.2), S2(0.8)} | S0, {S4(0.5), S3(0.5)}, {S1(0.5), S0(0.5)}, {S1(0.8), S0(0.2)} | S0, {S1(0.4), S2(0.6)}, {S1(0.4), S0(0.6)} | S0, {S0(0.5), S1(0.5)} | S0
E14: S0, {S3(0.3), S4(0.7)}, {S0(0.8), S1(0.2)}, {S3(0.3), S2(0.7)}, {S1(0.3), S0(0.7)} | S0, {S4(0.2), S3(0.8)}, {S2(0.6), S1(0.4)}, {S4(0.5), S3(0.5)} | S0, {S2(0.3), S1(0.7)}, {S2(0.4), S3(0.6)} | S0, {S1(0.8), S0(0.2)} | S0
E15: S0, {S2(0.6), S3(0.4)}, {S1(0.4), S0(0.6)}, {S3(0.5), S4(0.5)}, {S2(0.9), S3(0.1)} | S0, {S3(0.7), S4(0.3)}, {S3(0.1), S4(0.9)}, {S3(0.2), S2(0.8)} | S0, {S2(0.5), S1(0.5)}, {S2(0.6), S1(0.4)} | S0, {S1(0.4), S2(0.6)} | S0
E16: S0, {S3(0.6), S4(0.4)}, {S3(0.2), S4(0.8)}, {S2(0.4), S3(0.6)}, {S3(0.5), S4(0.5)} | S0, {S3(0.3), S2(0.7)}, {S4(0.6), S3(0.4)}, {S3(0.9), S2(0.1)} | S0, {S2(0.5), S1(0.5)}, {S3(0.8), S4(0.2)} | S0, {S2(0.5), S3(0.5)} | S0
E17: S0, {S1(0.1), S0(0.9)}, {S0(0.4), S1(0.6)}, {S2(0.7), S1(0.3)}, {S1(0.6), S0(0.4)} | S0, {S3(0.1), S4(0.9)}, {S2(0.5), S1(0.5)}, {S2(0.8), S3(0.2)} | S0, {S1(0.6), S2(0.4)}, {S3(0.3), S2(0.7)} | S0, {S2(0.4), S1(0.6)} | S0
E18: S0, {S2(0.4), S1(0.6)}, {S1(0.3), S0(0.7)}, {S2(0.7), S1(0.3)}, {S3(0.6), S2(0.4)} | S0, {S0(0.3), S1(0.7)}, {S4(0.6), S3(0.4)}, {S1(0.8), S2(0.2)} | S0, {S4(0.5), S3(0.5)}, {S3(0.4), S4(0.6)} | S0, {S3(0.5), S4(0.5)} | S0
E19: S0, {S3(0.7), S4(0.3)}, {S3(0.3), S4(0.7)}, {S2(0.2), S1(0.8)}, {S4(0.7), S3(0.3)} | S0, {S1(0.4), S2(0.6)}, {S3(0.2), S2(0.8)}, {S3(0.5), S4(0.5)} | S0, {S1(0.8), S2(0.2)}, {S1(0.6), S0(0.4)} | S0, {S3(0.1), S4(0.9)} | S0
E20: S0, {S2(0.7), S3(0.3)}, {S2(0.5), S1(0.5)}, {S4(0.5), S3(0.5)}, {S2(0.2), S1(0.8)} | S0, {S2(0.6), S1(0.4)}, {S1(0.7), S2(0.3)}, {S1(0.7), S0(0.3)} | S0, {S2(0.1), S1(0.9)}, {S3(0.4), S4(0.6)} | S0, {S3(0.6), S4(0.4)} | S0
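For readers who want to work with the preferences in Table A1 programmatically, the short Python sketch below shows one common way to encode a probabilistic linguistic term set such as {S3(0.4), S4(0.6)} and to compute its expected score. The scale granularity (tau = 4, i.e., terms S0 to S4) and the normalization to [0, 1] are illustrative assumptions following the usual PLTS conventions (cf. reference [28]), not the paper's exact transformation.

# Minimal sketch (illustrative only): encode a probabilistic linguistic term set
# (PLTS) such as {S3(0.4), S4(0.6)} as {subscript: probability} and compute an
# expected score. The scale S0..S4 (TAU = 4) and the [0, 1] normalization are
# assumptions, not the paper's exact transformation.

TAU = 4  # assumed granularity: linguistic terms S0, S1, ..., S4

def expected_score(plts):
    """Probability-weighted average subscript, normalized by TAU to lie in [0, 1]."""
    total_p = sum(plts.values())
    mean_subscript = sum(alpha * p for alpha, p in plts.items()) / total_p
    return mean_subscript / TAU

# Entry from E2's first row in Table A1: {S3(0.4), S4(0.6)}
print(round(expected_score({3: 0.4, 4: 0.6}), 3))  # 0.9 -> a strong preference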
Table A2. Similarity matrices I.
Only the upper triangle of the symmetric 20 × 20 similarity matrix is shown; row i lists the similarities between expert i and experts i, i + 1, …, 20.
1  0.856  0.933  0.717  0.816  0.905  0.916  0.017  0.746  0.856  0.855  0.718  0.875  0.784  0.866  0.988  0.717  0.074  0.033  0.643
1  0.851  0.957  0.717  0.836  0.684  0.677  0.455  0.965  0.739  0.792  0.836  0.731  0.855  0.863  0.943  0.846  0.799  0.627
1  0.892  0.937  0.985  0.891  0.903  0.833  0.925  0.868  0.911  0.941  0.777  0.857  0.963  0.888  0.795  0.752  0.798
1  0.929  0.879  0.751  0.933  0.937  0.729  0.736  0.939  0.859  0.645  0.802  0.791  0.988  0.771  0.871  0.669
1  0.961  0.797  0.976  0.813  0.859  0.921  0.968  0.913  0.869  0.769  0.861  0.882  0.851  0.917  0.858
1  0.895  0.939  0.781  0.922  0.924  0.939  0.905  0.822  0.838  0.925  0.866  0.831  0.799  0.841
1  0.743  0.775  0.718  0.845  0.739  0.718  0.652  0.971  0.885  0.788  0.918  0.688  0.503
1  0.828  0.839  0.859  0.998  0.852  0.776  0.713  0.773  0.098  0.846  0.8833  0.809
1  0.992  0.832  0.876  0.804  0.833  0.875  0.804  0.945  0.452  0.084  0.786
1  0.821  0.849  0.919  0.807  0.615  0.889  0.699  0.929  0.959  0.931
1  0.835  0.799  0.939  0.757  0.841  0.686  0.806  0.052  0.867
1  0.859  0.749  0.712  0.778  0.911  0.744  0.656  0.894
1  0.824  0.698  0.936  0.814  0.851  0.732  0.734
1  0.559  0.791  0.035  0.865  0.803  0.859
1  0.852  0.846  0.941  0.711  0.596
1  0.783  0.824  0.871  0.691
1  0.892  0.034  0.596
1  0.653  0.402
1  0.716
1
Table A3. Similarity matrices II.
Layout as in Table A2: only the upper triangle of the symmetric 20 × 20 similarity matrix is shown.
1  0.846  0.933  0.717  0.816  0.905  0.916  0.717  0.746  0.856  0.855  0.718  0.875  0.784  0.866  0.988  0.717  0.866  0.856  0.674
1  0.833  0.937  0.713  0.881  0.675  0.668  0.452  0.992  0.732  0.776  0.804  0.733  0.875  0.804  0.945  0.875  0.825  0.652
1  0.892  0.937  0.985  0.891  0.903  0.795  0.925  0.868  0.911  0.941  0.777  0.857  0.963  0.888  0.857  0.795  0.795
1  0.929  0.879  0.751  0.933  0.971  0.729  0.736  0.939  0.859  0.645  0.802  0.791  0.986  0.802  0.856  0.671
1  0.961  0.797  0.976  0.851  0.859  0.921  0.968  0.913  0.869  0.769  0.861  0.882  0.869  0.917  0.851
1  0.895  0.939  0.831  0.922  0.924  0.939  0.905  0.822  0.838  0.925  0.866  0.838  0.836  0.831
1  0.743  0.752  0.718  0.845  0.739  0.718  0.652  0.971  0.885  0.788  0.971  0.685  0.518
1  0.846  0.839  0.859  0.998  0.852  0.776  0.713  0.773  0.898  0.813  0.877  0.846
1  0.929  0.806  0.844  0.851  0.865  0.801  0.824  0.892  0.401  0.846  0.799
1  0.821  0.849  0.919  0.807  0.615  0.889  0.699  0.915  0.965  0.929
1  0.835  0.799  0.939  0.757  0.841  0.686  0.757  0.739  0.806
1  0.859  0.749  0.712  0.778  0.911  0.712  0.692  0.844
1  0.824  0.698  0.936  0.814  0.898  0.736  0.751
1  0.559  0.791  0.747  0.859  0.831  0.865
1  0.852  0.846  0.979  0.755  0.581
1  0.783  0.852  0.863  0.724
1  0.846  0.804  0.592
1  0.555  0.401
1  0.846
1
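Tables A2 and A3 list only the upper triangles of the pairwise similarity matrices. The Python sketch below is a minimal illustration rather than the paper's algorithm: it rebuilds the full symmetric matrix and flags experts whose average similarity to the others falls below a consensus threshold CT (the quantity varied in Figures 5–8). The flagging rule and the 0.87 threshold in the toy example are assumptions made for the sketch.

import numpy as np

# Illustrative sketch: rebuild a full symmetric similarity matrix from
# upper-triangular rows (as in Tables A2/A3) and flag experts whose average
# similarity to the others falls below a consensus threshold CT.
# The threshold rule below is a plausible reading, not the paper's exact method.

def to_full_matrix(upper_rows):
    """upper_rows[i] lists similarities of expert i with experts i, i+1, ..., n-1."""
    n = len(upper_rows)
    sim = np.zeros((n, n))
    for i, row in enumerate(upper_rows):
        for offset, value in enumerate(row):
            j = i + offset
            sim[i, j] = sim[j, i] = value
    return sim

def low_consensus_experts(sim, ct):
    """0-based indices of experts whose mean similarity to the others is below ct."""
    n = sim.shape[0]
    mean_sim = (sim.sum(axis=1) - np.diag(sim)) / (n - 1)
    return [i for i, s in enumerate(mean_sim) if s < ct]

# First three rows of Table A2 truncated to a 3-expert toy example:
toy = [[1, 0.856, 0.933], [1, 0.851], [1]]
sim = to_full_matrix(toy)
print(low_consensus_experts(sim, ct=0.87))  # [1] -> the second expert falls below 0.87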
Table A4. The initial weights of conflict experts.
Conflict Expert Weight   O1      O8      O9      O11     O14     O17     O18     O19
Proposal 1               0.293   0.301   0.294   0.295   0.334   0.293   0.296   0.305
Proposal 2               0.116   0.121   0.115   0.117   0.118   0.114   0.114   0.116
Proposal 3               0.207   0.187   0.206   0.198   0.202   0.208   0.193   0.188
Proposal 4               0.168   0.178   0.169   0.175   0.177   0.169   0.181   0.177
Proposal 5               0.216   0.213   0.216   0.215   0.169   0.216   0.216   0.214
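As a quick sanity check on Table A4, each attribute column should give a weight distribution over the five conflict proposals that sums to one. The snippet below verifies this for the initial weights; reading each column as a normalized distribution per attribute is our interpretation of the table, and the variable names are ours.

# Illustrative check on Table A4: each attribute column O_k should give a weight
# distribution over the five conflict proposals that sums to 1. This is a simple
# consistency check, not part of the paper's dual-path adjustment mechanism.

ATTRIBUTES = ["O1", "O8", "O9", "O11", "O14", "O17", "O18", "O19"]

INITIAL_WEIGHTS = {  # rows of Table A4: proposal -> weights per attribute
    "Proposal 1": [0.293, 0.301, 0.294, 0.295, 0.334, 0.293, 0.296, 0.305],
    "Proposal 2": [0.116, 0.121, 0.115, 0.117, 0.118, 0.114, 0.114, 0.116],
    "Proposal 3": [0.207, 0.187, 0.206, 0.198, 0.202, 0.208, 0.193, 0.188],
    "Proposal 4": [0.168, 0.178, 0.169, 0.175, 0.177, 0.169, 0.181, 0.177],
    "Proposal 5": [0.216, 0.213, 0.216, 0.215, 0.169, 0.216, 0.216, 0.214],
}

# Each column should be a normalized weight vector over the five proposals.
for k, name in enumerate(ATTRIBUTES):
    column_sum = sum(weights[k] for weights in INITIAL_WEIGHTS.values())
    assert abs(column_sum - 1.0) < 1e-6, f"{name} weights do not sum to 1"
print("All attribute columns in Table A4 sum to 1.")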
Table A5. The new weights of conflict experts.
Conflict Expert Reweight   O1      O8      O9      O11     O14     O17     O18     O19
Proposal 1                 0.179   0.237   0.184   0.273   0.296   0.235   0.294   0.215
Proposal 2                 0.235   0.058   0.129   0.098   0.116   0.119   0.195   0.146
Proposal 3                 0.078   0.254   0.244   0.149   0.192   0.206   0.061   0.171
Proposal 4                 0.271   0.176   0.216   0.188   0.181   0.185   0.139   0.212
Proposal 5                 0.237   0.275   0.224   0.292   0.215   0.255   0.301   0.256
Table A6. The BPA of conflict experts for proposals.
BPA (Sol.)   m1     m2     m3     m4     m5     m6     ……   m12    m13    m14    m15    m16    m17    m18    m19    m20
X1           0.27   0.18   0.32   0.35   0.29   0.35   ……   0.27   0.18   0.15   0.33   0.25   0.41   0.24   0.29   0.21
X2           0.18   0.24   0.07   0.04   0.12   0.09   ……   0.09   0.08   0.26   0.12   0.14   0.01   0.11   0.19   0.15
X3           0.29   0.08   0.24   0.05   0.16   0.24   ……   0.15   0.25   0.27   0.20   0.27   0.04   0.21   0.06   0.17
X4           0.21   0.27   0.20   0.26   0.13   0.13   ……   0.19   0.25   0.03   0.18   0.24   0.29   0.19   0.14   0.22
X5           0.05   0.23   0.17   0.30   0.30   0.19   ……   0.30   0.24   0.29   0.17   0.10   0.25   0.25   0.32   0.25
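Table A6 lists singleton-focal BPAs m1–m20 over the five alternatives X1–X5. The sketch below applies the classical (unmodified) Dempster combination rule to the m1 and m4 columns, which appears to be the pairing examined in Figure 1. It is included only as a baseline illustration of how such columns combine and how large the conflict coefficient becomes; it is not the trust- and linguistic-enhanced (PLDS) procedure proposed in the paper.

# Illustrative sketch: classical Dempster combination of two singleton-focal BPAs
# taken from Table A6 (columns m1 and m4). This is the unmodified Dempster rule,
# shown only as a baseline, not the paper's PLDS procedure.

def dempster_combine(m_a, m_b):
    """Combine two BPAs whose focal elements are the singletons X1..X5."""
    joint = {x: m_a[x] * m_b[x] for x in m_a}   # mass on agreeing singletons
    conflict = 1.0 - sum(joint.values())        # mass assigned to conflicting pairs
    if conflict >= 1.0:
        raise ValueError("Totally conflicting evidence; Dempster's rule is undefined.")
    return {x: v / (1.0 - conflict) for x, v in joint.items()}, conflict

m1 = {"X1": 0.27, "X2": 0.18, "X3": 0.29, "X4": 0.21, "X5": 0.05}
m4 = {"X1": 0.35, "X2": 0.04, "X3": 0.05, "X4": 0.26, "X5": 0.30}

combined, k = dempster_combine(m1, m4)
print(f"conflict coefficient K = {k:.4f}")            # K = 0.8142
print({x: round(v, 3) for x, v in combined.items()})  # X1 dominates after fusion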

References

  1. Bauer, M. Approximation algorithms and decision making in the Dempster-Shafer theory of evidence—An empirical study. Int. J. Approx. Reason. 1997, 17, 217–237. [Google Scholar] [CrossRef]
  2. Shafer, G. Dempster-Shafer theory. Encycl. Artif. Intell. 1992, 1, 330–331. [Google Scholar]
  3. Fei, L.; Deng, Y.; Hu, Y. DS-VIKOR: A new multi-criteria decision-making method for supplier selection. Int. J. Fuzzy Syst. 2019, 21, 157–175. [Google Scholar] [CrossRef]
  4. Fu, Q.; Song, Y.; Fan, C.-L.; Lei, L.; Wang, X. Evidential model for intuitionistic fuzzy multi-attribute group decision making. Soft Comput. 2020, 24, 7615–7635. [Google Scholar] [CrossRef]
  5. Li, Y.-L.; Yan, Q.-F.; Wu, J.-W. Multi-attribute group decision making method based on evidence theory. In Proceedings of the 2017 13th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), Guilin, China, 29–31 July 2017; pp. 3043–3047. [Google Scholar]
  6. Ivanov, V.K.; Vinogradova, N.; Palyukh, B.V.; Sotnikov, A.N. Current trends and applications of Dempster-Shafer theory. arXiv 2021, arXiv:2103.15592. [Google Scholar]
  7. Gao, X.; Pan, L.; Deng, Y. A generalized divergence of information volume and its applications. Eng. Appl. Artif. Intell. 2022, 108, 104584. [Google Scholar] [CrossRef]
  8. Fei, L.; Wang, Y. An optimization model for rescuer assignments under an uncertain environment by using Dempster–Shafer theory. Knowl.-Based Syst. 2022, 255, 109680. [Google Scholar] [CrossRef]
  9. Li, W.; Bao, J.; Fu, X.; Fortino, G.; Galzarano, S. Human postures recognition based on DS evidence theory and multi-sensor data fusion. In Proceedings of the 2012 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (ccgrid 2012), Ottawa, ON, Canada, 13–16 May 2012; pp. 912–917. [Google Scholar]
  10. Lu, C.; Li, Y.; Ma, M.; Li, N. A hybrid NIDS model using artificial neural network and DS evidence. Int. J. Digit. Crime Forensics (IJDCF) 2016, 8, 37–50. [Google Scholar] [CrossRef]
  11. Ghahramani, Z. Probabilistic machine learning and artificial intelligence. Nature 2015, 521, 452–459. [Google Scholar] [CrossRef]
  12. Tong, Z.; Xu, P.; Denoeux, T. An evidential classifier based on Dempster-Shafer theory and deep learning. Neurocomputing 2021, 450, 275–293. [Google Scholar] [CrossRef]
  13. Pan, Y.; Zhang, L. Roles of artificial intelligence in construction engineering and management: A critical review and future trends. Autom. Constr. 2021, 122, 103517. [Google Scholar] [CrossRef]
  14. Çavdar, T.; Ebrahimpour, N.; Kakız, M.T.; Günay, F.B. Decision-making for the anomalies in IIoTs based on 1D convolutional neural networks and Dempster–Shafer theory (DS-1DCNN). J. Supercomput. 2023, 79, 1683–1704. [Google Scholar] [CrossRef]
  15. Ishikiriyama, C.S.; Miro, D.; Gomes, C.F.S. Text mining business intelligence: A small sample of what words can say. Procedia Comput. Sci. 2015, 55, 261–267. [Google Scholar] [CrossRef]
  16. Li, H.-D.; Zhang, J.; Xiao, L.; Li, H.-P.; Feng, Y. Application of DS evidence theory in combined price forecasting. In Proceedings of the 2008 Third International Conference on Electric Utility Deregulation and Restructuring and Power Technologies, Nanjing, China, 6–9 April 2008; pp. 1025–1029. [Google Scholar]
  17. Xiao, Z.; Yang, X.; Pang, Y.; Dang, X. The prediction for listed companies’ financial distress by using multiple prediction methods with rough set and Dempster–Shafer evidence theory. Knowl.-Based Syst. 2012, 26, 196–206. [Google Scholar] [CrossRef]
  18. Chen, B.; Wang, J.; Chen, S. Prediction of pulsed GTAW penetration status based on BP neural network and DS evidence theory information fusion. Int. J. Adv. Manuf. Technol. 2010, 48, 83–94. [Google Scholar] [CrossRef]
  19. Zhan, T.; Xiao, F. A fast evidential approach for stock forecasting. Int. J. Intell. Syst. 2021, 36, 7544–7562. [Google Scholar] [CrossRef]
  20. Xiao, F. CEQD: A complex mass function to predict interference effects. IEEE Trans. Cybern. 2021, 52, 7402–7414. [Google Scholar] [CrossRef]
  21. Salehy, N.; Ökten, G. Dempster-Shafer theory for stock selection. In Proceedings of the 2021 IEEE 45th Annual Computers, Software, and Applications Conference (COMPSAC), Madrid, Spain, 12–16 July 2021; pp. 1729–1734. [Google Scholar]
  22. Liu, P.; Hendalianpour, A.; Hamzehlou, M.; Feylizadeh, M. Cost reduction of inventory-production-system in multi-echelon supply chain using game theory and fuzzy demand forecasting. Int. J. Fuzzy Syst. 2022, 24, 1793–1813. [Google Scholar] [CrossRef]
  23. Zadeh, L.A. The concept of a linguistic variable and its application to approximate reasoning. Inf. Sci. 1975, 8, 199–249. [Google Scholar] [CrossRef]
  24. Jin, F.; Cao, M.; Liu, J.; Martínez, L.; Chen, H. Consistency and trust relationship-driven social network group decision-making method with probabilistic linguistic information. Appl. Soft Comput. 2021, 103, 107170. [Google Scholar] [CrossRef]
  25. Li, P.; Wei, C. An emergency decision-making method based on D-S evidence theory for probabilistic linguistic term sets. Int. J. Disaster Risk Reduct. 2019, 37, 101178. [Google Scholar] [CrossRef]
  26. Ye, F.; Chen, J.; Li, Y. Improvement of DS evidence theory for multi-sensor conflicting information. Symmetry 2017, 9, 69. [Google Scholar] [CrossRef]
  27. Huang, J.; You, X.-Y.; Liu, H.-C.; Si, S.-L. New approach for quality function deployment based on proportional hesitant fuzzy linguistic term sets and prospect theory. Int. J. Prod. Res. 2019, 57, 1283–1299. [Google Scholar] [CrossRef]
  28. Pang, Q.; Wang, H.; Xu, Z. Probabilistic linguistic term sets in multi-attribute group decision making. Inf. Sci. 2016, 369, 128–143. [Google Scholar] [CrossRef]
  29. Feng, R.; Xu, X.; Zhou, X.; Wan, J. A trust evaluation algorithm for wireless sensor networks based on node behaviors and ds evidence theory. Sensors 2011, 11, 1345–1360. [Google Scholar] [CrossRef] [PubMed]
  30. Jin, C.; Wang, H.; Xu, Z. Uncertain Probabilistic Linguistic Term Sets in Group Decision Making. Int. J. Fuzzy Syst. 2019, 21, 1241–1258. [Google Scholar] [CrossRef]
  31. Zhao, G.; Chen, A.; Lu, G.; Liu, W. Data fusion algorithm based on fuzzy sets and DS theory of evidence. Tsinghua Sci. Technol. 2019, 25, 12–19. [Google Scholar] [CrossRef]
  32. Zeng, D.; Xu, J.; Xu, G. Data Fusion for Traffic Incident Detector Using DS Evidence Theory with Probabilistic SVMs. J. Comput. 2008, 3, 36–43. [Google Scholar] [CrossRef]
  33. Zhang, W.; Ji, X.; Yang, Y.; Chen, J.; Gao, Z.; Qiu, X. Data fusion method based on improved DS evidence theory. In Proceedings of the 2018 IEEE International Conference on Big Data and Smart Computing (BigComp), Shanghai, China, 15–17 January 2018; pp. 760–766. [Google Scholar]
  34. Sun, R.; Huang, H.-Z.; Miao, Q. Improved information fusion approach based on DS evidence theory. J. Mech. Sci. Technol. 2008, 22, 2417–2425. [Google Scholar] [CrossRef]
  35. He, J.; Ma, C.; Shi, Y. Multi-effect-quantity fusion model of high arch dam based on improved DS evidence theory. Geomat. Inf. Sci. Wuhan Univ. 2012, 37, 1397–1400. [Google Scholar]
  36. Zhao, K.; Sun, R.; Li, L.; Hou, M.; Yuan, G.; Sun, R. An optimal evidential data fusion algorithm based on the new divergence measure of basic probability assignment. Soft Comput. 2021, 25, 11449–11457. [Google Scholar] [CrossRef]
  37. Zhao, K.; Li, L.; Chen, Z.; Sun, R.; Yuan, G.; Li, J. A survey: Optimization and applications of evidence fusion algorithm based on Dempster–Shafer theory. Appl. Soft Comput. 2022, 124, 109075. [Google Scholar] [CrossRef]
  38. Liu, Z.; Xiao, G.; Liu, H.; Wei, H. Multi-sensor measurement and data fusion. IEEE Instrum. Meas. Mag. 2022, 25, 28–36. [Google Scholar] [CrossRef]
  39. Zhu, C.; Xiao, F. A belief Hellinger distance for D–S evidence theory and its application in pattern recognition. Eng. Appl. Artif. Intell. 2021, 106, 104452. [Google Scholar] [CrossRef]
  40. Cios, K.J.; Freasier, R.; Goodenday, L.S.; Andrews, L. An expert system for diagnosis of coronary artery stenosis based on 201TI scintigrams using the Dempster-Shafer theory of evidence. Bioinformatics 1990, 6, 333–342. [Google Scholar] [CrossRef] [PubMed]
  41. Gobet, F. A pattern-recognition theory of search in expert problem solving. Think. Reason. 1997, 3, 291–313. [Google Scholar] [CrossRef]
  42. Luo, H.; Yang, S.-L.; Hu, X.-J.; Hu, X.-X. Agent oriented intelligent fault diagnosis system using evidence theory. Expert Syst. Appl. 2012, 39, 2524–2531. [Google Scholar] [CrossRef]
  43. Yang, H.; Hassan, S.G.; Wang, L.; Li, D. Fault diagnosis method for water quality monitoring and control equipment in aquaculture based on multiple SVM combined with DS evidence theory. Comput. Electron. Agric. 2017, 141, 96–108. [Google Scholar] [CrossRef]
  44. Hossain, S.; Zahid Hasan, M.; Patwary, M.J.; Uddin, M.S. An expert system to determine systemic lupus erythematosus under uncertainty. In Proceedings of the International Joint Conference on Advances in Computational Intelligence: IJCACI 2020, Bangladesh, India, 20–21 November 2020; pp. 117–130. [Google Scholar]
  45. Li, B.; Pang, F.-W. An approach of vessel collision risk assessment based on the D–S evidence theory. Ocean Eng. 2013, 74, 16–21. [Google Scholar] [CrossRef]
  46. Jiao, Z.; Gong, H.; Wang, Y. A DS evidence theory-based relay protection system hidden failures detection method in smart grid. IEEE Trans. Smart Grid 2016, 9, 2118–2126. [Google Scholar] [CrossRef]
  47. Han, Y.; Chen, Q.; Wang, J.-X. An enhanced DS theory cooperative spectrum sensing algorithm against SSDF attack. In Proceedings of the 2012 IEEE 75th Vehicular Technology Conference (VTC Spring), Yokohama, Japan, 6–9 May 2012; pp. 1–5. [Google Scholar]
  48. Yu, J.; Hu, M.; Wang, P. Evaluation and reliability analysis of network security risk factors based on DS evidence theory. J. Intell. Fuzzy Syst. 2018, 34, 861–869. [Google Scholar] [CrossRef]
  49. Sun, L.; Srivastava, R.P.; Mock, T.J. An information systems security risk assessment model under the Dempster-Shafer theory of belief functions. J. Manag. Inf. Syst. 2006, 22, 109–142. [Google Scholar] [CrossRef]
  50. Wang, C.; Zhang, Y. Network security situation evaluation based on modified DS evidence theory. Wuhan Univ. J. Nat. Sci. 2014, 19, 409–416. [Google Scholar] [CrossRef]
  51. Mu, C.; Li, X.; Huang, H.; Tian, S. Online risk assessment of intrusion scenarios using DS evidence theory. In Proceedings of the Computer Security-ESORICS 2008: 13th European Symposium on Research in Computer Security, Málaga, Spain, 6–8 October 2008; pp. 35–48. [Google Scholar]
  52. Ellison, C.G.; Levin, J.S. The religion-health connection: Evidence, theory, and future directions. Health Educ. Behav. 1998, 25, 700–720. [Google Scholar] [CrossRef] [PubMed]
  53. Bloch, I. Some aspects of Dempster-Shafer evidence theory for classification of multi-modality medical images taking partial volume effect into account. Pattern Recognit. Lett. 1996, 17, 905–919. [Google Scholar] [CrossRef]
  54. Jiang, Y.; Wang, Z.; Zhang, Y.; Chen, N. Health diagnosis of marine engine room equipment based on BP and DS evidence theory. Vibroeng. Procedia 2022, 45, 15–21. [Google Scholar] [CrossRef]
  55. Fang, R.; Liao, H.; Mardani, A. How to aggregate uncertain and incomplete cognitive evaluation information in lung cancer treatment plan selection? A method based on Dempster-Shafer theory. Inf. Sci. 2022, 603, 222–243. [Google Scholar] [CrossRef]
Figure 1. Synthesis of the mass functions of evidence sources one and four. Note: each scatter point represents the mass value of one attribute; comparing the mass-value curves computed by the two methods shows how they differ when combining conflicting expert evidence.
Figure 2. Synthesis of the mass functions of evidence sources three and five.
Figure 3. Synthesis of the mass functions of evidence sources four and seven.
Figure 4. Synthesis of the mass functions of evidence sources six and eight.
Figure 5. Comparison between DS and PLDS with noise (CT = 0.65).
Figure 6. Comparison between DS and PLDS with noise (CT = 0.75).
Figure 7. Comparison between DS and PLDS with noise (CT = 0.85).
Figure 8. Comparison between DS and PLDS with noise (CT = 0.95).
Figure 9. Heatmap of expert matrix comparison based on trust relationships.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
