Article

A Possible Degree-Based D–S Evidence Theory Method for Ranking New Energy Vehicles Based on Online Customer Reviews and Probabilistic Linguistic Term Sets

School of Mathematics and Statistics, Guilin University of Technology, Guilin 541002, China
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(4), 583; https://doi.org/10.3390/math13040583
Submission received: 16 January 2025 / Revised: 27 January 2025 / Accepted: 30 January 2025 / Published: 10 February 2025
(This article belongs to the Special Issue Advances in Fuzzy Decision Theory and Applications, 2nd Edition)

Abstract

As people's environmental awareness increases and the "double carbon" policy is implemented, the new energy vehicle (NEV) has become a popular form of transportation and more and more car manufacturers have started to produce NEVs. Thus, how to choose an appropriate type of NEV from many brands is an interesting topic for customers, which can be regarded as a multiple-attribute decision-making (MADM) problem because customers are often concerned with several different factors such as the price, endurance mileage, appearance and so on. This paper proposes a possible degree-based D–S evidence theory method for helping customers select a proper type of NEV in the probabilistic linguistic environment. In order to derive decision information reflecting customer demands, online customer reviews (OCRs) are crawled from multiple websites and converted into five-granularity probabilistic linguistic term sets (PLTSs). Afterwards, by maximizing deviation and minimizing the information uncertainty, a bi-objective programming model is built to determine attribute weights. Furthermore, a possible degree-based D–S evidence theory method in the PLTS environment is proposed to rank the alternatives for each website. To fuse these ranking results, a 0–1 programming model is set up by maximizing the consensus between the comprehensive ranking and the individual ranking of each website. At length, a case study of selecting a type of NEV is provided to show the application and validity of the proposed method.

1. Introduction

With the rapid development of the national economy and the improvement of people's living standards, cars have become a common means of transportation for many people. However, fuel vehicles emit a large amount of exhaust gas, which causes air pollution [1,2] and affects air quality [3,4]. In order to control environmental pollution and decrease carbon emissions, the Chinese government has formulated a series of policies and reform measures, including levying green taxes and promoting new energy vehicles (NEVs). Supported by these policies, more and more car manufacturers are shifting their focus from fuel vehicles to NEVs, such as Chuanqi, Changcheng and Jili. Meanwhile, some NEV manufacturers, like BYD and Tesla, are booming. Thus, more and more brands of NEVs have emerged in the market, leaving consumers puzzled about selecting an appropriate car. Generally, when consumers seek to purchase a NEV, they often evaluate and compare candidate vehicles in multiple ways, considering the price, appearance, interior, intelligence and so on. Therefore, the selection of an optimal NEV among a collection of candidates can be considered a typical multi-attribute decision-making (MADM) problem.
As social media and mobile devices become more and more popular, many customers prefer to share their reviews and opinions on social media platforms. These online reviews are very helpful for potential purchasers because they provide much valuable information, including purchasing experiences and evaluations on many indices of NEVs. Generally, an online review comprises a total rating and a text description. Compared with the rating, the text description may express more abundant evaluations and more explicitly reflect the opinions of a consumer regarding his/her car. Sentiment analysis [5,6,7] is an effective tool for handling and transforming online reviews into proper decision-making information such as intuitionistic fuzzy sets, hesitant intuitionistic fuzzy sets and q-rung orthopair fuzzy numbers. Nevertheless, these forms only describe the proportions of positive and negative sentiments, but are unable to express the strength of positive and negative sentiments. To make up for this limitation, this study converts the sentiment analysis results into five-granularity PLTSs, which can indicate different sentiment orientations together with their probabilities and strengths.
After transforming online reviews into PLTSs, the decision matrix is formed. In the decision-making process, the attribute weights play an important role and are often determined by experts [8], minimizing similarity degree [9], maximizing deviation [10] or other optimization models [11] in most MADM methods. However, it is hard for experts to assign reasonable attribute weights and the assigned weights have some subjective randomness. Although the minimizing similarity degree method and the maximizing deviation method objectively derive attribute weights based on decision information, the former only considers the similarity of each attribute, and the latter only considers the differences between alternatives on each attribute, both of which ignore the uncertainty of the decision information obtained from the sentiment analysis results. Therefore, by combining the differences and uncertainty of decision information, this paper constructs a bi-objective programming model to acquire attribute weights objectively. Furthermore, to decrease the uncertainty and fuse the decision information from multiple aspects, alternatives are ranked with the D–S evidence theory method which has been widely used in distinct fuzzy environments [12,13,14] and different fields [15,16,17].
Against this background, this paper presents a possible degree-based D–S evidence theory method for resolving NEV selection problems with online reviews. The innovations and contributions of this paper are outlined as follows:
(1)
This paper crawls online reviews from multiple websites and transforms them into five-granularity PLTSs. Compared with existing methods in which online reviews come from a single website and are transformed into hesitant intuitionistic fuzzy sets or q-rung orthopair fuzzy numbers, the decision information concealed in the online reviews is more plentiful and more clearly expresses the sentiment orientations, including very positive, positive, neutral, negative and very negative sentiments.
(2)
Attribute weights are determined by building a bi-objective programming model based on deviation maximization and information entropy, by which both the differences between alternatives and the uncertainty of the decision information on each attribute are considered. In other studies, the attribute weights are assigned subjectively by experts or derived solely from deviation maximization. Thus, the attribute weights generated in this paper reflect the quality of the decision information more comprehensively and are more reliable.
(3)
A possible degree-based D–S theory method is proposed to rank alternatives. A dominant feature of this method is that it can quantitatively measure and decrease the uncertainty of decision information. Furthermore, it has a stronger distinguishing power compared with existing D–S theory methods.
The rest of this paper is arranged as follows: Section 2 conducts a brief literature review related to this study. Section 3 introduces some basic theories on the probabilistic linguistic term set and Dempster–Shafer theory methods. Section 4 develops a new sentiment analysis technique and proposes a possible degree-based D–S theory method in the PLTS context. Section 5 provides a case study of the proposed method to prove its practicability and rationality. Section 6 performs sensitivity and comparative analyses of this method against other methods from multiple perspectives. Section 7 summarizes the conclusions.

2. Literature Review

This section conducts literature reviews from two aspects, including decision methods for resolving NEV selection problems and MADM methods with online reviews in the probabilistic linguistic context.

2.1. Decision Methods for Resolving NEV Selection Problems

As NEVs become more and more prevalent in the market, an increasing number of NEV manufacturers are emerging. In this context, numerous decision methods have been developed for evaluating and selecting NEVs. For example, Ziemba [18] investigated five NEV users to collect decision information in the form of real numbers; attribute weights were then assigned by experts, and a New Easy Approach To Fuzzy PROMETHEE method was proposed to rank NEVs. By inviting experts to perform pairwise comparisons, Huang et al. [19] set up preference relations with distributed linguistic term sets and furnished a TSOG-PSO approach to evaluate NEVs. Using data from automobile manufacturers, Dwivedi and Sharma [2] obtained attribute weights with Shannon entropy and ranked NEVs by the TOPSIS method. In these studies, the data come from few or no NEV users, so the objectivity and comprehensiveness of the data may be affected by the data source and its size.
In recent years, many NEV users have posted their experiences and opinions on social platforms such as autohome.com and xcar.com, and many potential users look through such opinions before buying NEVs. More and more scholars are inclined to study how to evaluate and select the best NEVs based on online reviews, and some novel decision methods relying on online reviews have been proposed. Tian et al. [6] crawled online reviews from the "Autohome" platform and transformed them into hesitant intuitionistic fuzzy sets. Afterwards, combining subjective weights generated by the best–worst method with objective weights derived by maximizing deviation, the comprehensive attribute weights were determined. At length, the extended ORESTE method was proposed to rank NEVs. Yang et al. [7] also collected comments from the "Autohome" platform and then transformed them into q-rung orthopair fuzzy numbers. According to cumulative prospect theory [20], candidate NEVs were ranked based on attribute weights assigned by experts. Resorting to the "Dcar" platform, Liu et al. [21] mined online reviews and expressed them in the form of hesitant probabilistic fuzzy sets. Subsequently, the attribute weights were obtained based on the prospect values of alternatives, and the TODIM-MULTIMOORA method was applied to sort NEVs.
The above research gathered online reviews from a single website and transformed them into different fuzzy sets rather than linguistic term sets. In addition, the uncertainty of decision information is neglected when determining attribute weights. As users prefer to comment on NEVs on different platforms and describe opinions in natural language, this paper crawls online reviews from multiple websites and converts them into five-granularity PLTSs, so that the decision information is abundant and retains the initial language format together with probabilities. Furthermore, the uncertainty of decision information is considered in addition to the deviations between evaluations of NEVs.

2.2. MADM Methods with Online Reviews in the PLTS Environment

Owing to the fact that PLTSs can use linguistic terms associated with probabilities to describe comments both in linguistic form and in frequency, they have been applied to resolve decision problems with online reviews in many fields, and different MADM methods have been presented [4,22,23,24,25]. For instance, Chen and Li [22] investigated doctor ranking problems in online consultation and proposed a Combined Compromise Solution method by transforming patients' reviews into three-granularity PLTSs. By computing sentiment scores with a sentiment analysis technique, Darko et al. [23] converted online reviews into five-granularity PLTSs and developed a PL-ELECTRE methodology for ranking mobile payment services. Similarly, Yang et al. [24] addressed product selection problems and built an online product decision support system framework based on regret theory. Zhao et al. [25] applied the TOPSIS method to resolve hotel selection problems with online reviews. Wan et al. [26] put forward an integrated method of prospect theory, DEMATEL (decision-making trial and evaluation laboratory) and QUALIFLEX (qualitative flexible) to resolve photovoltaic power station site selection problems.
Although the above-mentioned literature has transformed online reviews into PLTSs and successfully tackled many decision problems in many fields, only positive and negative sentiments are considered, and the sentiment strength is ignored. Thus, the transformed decision information is unable to accurately indicate the opinions in the online reviews. For example, given two reviews, "the seat is comfortable" and "the seat is very comfortable", though they are both positive, their strengths are obviously different and the latter is stronger. However, existing methods regard them as having the same strength. Moreover, decision methods such as TOPSIS, ELECTRE and regret theory are incapable of measuring and decreasing the uncertainty embodied in the decision information.
To fill the above-stated gaps, this paper studies the NEV selection problem based on online reviews. By considering nouns, adjectives, adverbs and negation words, online reviews are directly converted into five-granularity PLTSs corresponding to different levels of sentiment, including very positive, positive, neutral, negative and very negative sentiments. Therefore, sentiment orientations and their strengths are both reflected. Furthermore, a possible degree-based D–S evidence theory method is proposed to quantitatively measure and decrease the uncertainty of the decision information and derive reasonable decision results.

3. Preliminaries

This section reviews some definitions and methods to facilitate discussion in the sequel, such as the probabilistic linguistic term set, the D–S theory and existing D–S theory-based decision methods.

3.1. Probabilistic Linguistic Term Sets

Definition 1
[27]. Let $S = \{ s_\alpha \mid \alpha = -\eta, \ldots, -1, 0, 1, \ldots, \eta \}$ be a linguistic term set (LTS). Then, a probabilistic linguistic term set (PLTS) is defined as
$$L(p) = \left\{ s_{\alpha^{(l)}}\left(p^{(l)}\right) \;\middle|\; s_{\alpha^{(l)}} \in S,\ p^{(l)} \ge 0,\ l = 1, 2, \ldots, L_o,\ \sum_{l=1}^{L_o} p^{(l)} \le 1 \right\}. \quad (1)$$
where $s_{\alpha^{(l)}}(p^{(l)})$ is a probabilistic linguistic element (PLE), representing the $l$th possible linguistic term $s_{\alpha^{(l)}}$ associated with the probability $p^{(l)}$, and $L_o$ is the number of all linguistic terms in $L(p)$.
Definition 2
[28]. Let $S = \{ s_\alpha \mid \alpha = -\eta, \ldots, -1, 0, 1, \ldots, \eta \}$ be an LTS; the linguistic term $s_\alpha$ that expresses the information equivalent to the membership degree $\theta$ is obtained by a linguistic scale function $f$:
$$f:\ [s_{-\eta}, s_\eta] \to [0, 1],\quad f(s_\alpha) = (\alpha + \eta)/2\eta = \theta. \quad (2)$$
Further, the membership degree $\theta$ that expresses the information equivalent to the linguistic variable $s_\alpha$ is obtained as
$$f^{-1}:\ [0, 1] \to [s_{-\eta}, s_\eta],\quad f^{-1}(\theta) = s_{2\eta\theta - \eta} = s_\alpha. \quad (3)$$
Definition 3
[29]. The probability of missing information is evenly distributed among all linguistic terms in the linguistic term set. In this case, the standard PLTS can be expressed as
$$\hat{L}(p) = \left\{ s_{\alpha^{(l)}}\left(\hat{p}^{(l)}\right) \;\middle|\; s_{\alpha^{(l)}} \in S,\ \hat{p}^{(l)} \ge 0,\ l = 1, 2, \ldots, \hat{L}_o,\ \sum_{l=1}^{\hat{L}_o} \hat{p}^{(l)} = 1 \right\}. \quad (4)$$
where $\hat{p}^{(l)} = p^{(l)} + \left(1 - \sum_{l=1}^{\hat{L}_o} p^{(l)}\right)/(2\eta + 1)$.
Definition 4
[28]. Let $\hat{L}(p)$, $\hat{L}_1(p)$, $\hat{L}_2(p)$ be three standard PLTSs; then, the basic operations of PLTSs are stipulated as
$$\hat{L}_1(p) \oplus \hat{L}_2(p) = \bigcup_{s^{(l_1)} \in \hat{L}_1(p),\, s^{(l_2)} \in \hat{L}_2(p)} \left\{ f^{-1}\left[ f\left(s^{(l_1)}\right) + f\left(s^{(l_2)}\right) - f\left(s^{(l_1)}\right) f\left(s^{(l_2)}\right) \right] \left( p^{(l_1)} p^{(l_2)} \right) \right\}, \quad (5)$$
$$\lambda \hat{L}(p) = \bigcup_{s^{(l)} \in \hat{L}(p)} \left\{ f^{-1}\left( 1 - \left(1 - f\left(s^{(l)}\right)\right)^\lambda \right) \left( p^{(l)} \right) \right\}. \quad (6)$$
where $l_1 = 1, 2, \ldots, \hat{L}_{o_1}$, $l_2 = 1, 2, \ldots, \hat{L}_{o_2}$, $l = 1, 2, \ldots, \hat{L}_o$.
Definition 5
[30]. Let $\hat{L}(p)$ be a standard PLTS; the score function for $\hat{L}(p)$ in the probabilistic linguistic form is defined as
$$E\left(\hat{L}(p)\right) = \sum_{l=1}^{\hat{L}_o} f\left(s_{\alpha^{(l)}}\right) \hat{p}^{(l)} \Big/ \sum_{l=1}^{\hat{L}_o} \hat{p}^{(l)}. \quad (7)$$
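To make Definitions 2 and 5 concrete, the short Python sketch below (illustrative only, not the authors' implementation) evaluates the linguistic scale function of Equation (2) and the score function of Equation (7) for a standard PLTS stored as a dictionary mapping subscripts $\alpha$ to probabilities; a five-granularity LTS with $\eta = 2$ is assumed.

```python
# Minimal sketch (not the authors' code): score of a standard PLTS, Eqs. (2) and (7).
# A PLTS is modeled here as a dict {alpha: probability} with alpha in {-eta, ..., eta}.

def scale(alpha: int, eta: int = 2) -> float:
    """Linguistic scale function f(s_alpha) = (alpha + eta) / (2 * eta)."""
    return (alpha + eta) / (2 * eta)

def score(plts: dict, eta: int = 2) -> float:
    """Score E(L(p)) = sum f(s_alpha) * p_alpha / sum p_alpha."""
    total_p = sum(plts.values())
    return sum(scale(a, eta) * p for a, p in plts.items()) / total_p

# Example: a five-granularity PLTS {s_1(0.6), s_2(0.4)}
print(score({1: 0.6, 2: 0.4}))  # 0.85
```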

3.2. Dempster–Shafer Theory

The D–S evidence theory, proposed by Dempster [31] and improved by Shafer [32], fuses the evidence from multiple sources with different attributes and can explicitly measure the uncertainty in decision making.
Definition 6
[32]. Let $\Theta = \{ A_1, A_2, \ldots, A_i, \ldots, A_m \}$ be a frame of discernment, a set of collectively exhaustive and mutually exclusive hypotheses. A mass function, which is also called a basic probability assignment (BPA), is a mapping $m: 2^\Theta \to [0, 1]$ satisfying
$$\sum_{D \subseteq \Theta} m(D) = 1 \quad \text{and} \quad m(\emptyset) = 0. \quad (8)$$
where $\emptyset$ is the empty set and $D$ is a subset of $\Theta$. $2^\Theta$, the power set of $\Theta$, consists of all subsets of $\Theta$. The probability mass $m(D)$ measures the belief exactly assigned to $D$ and represents how strongly the evidence supports $D$. The probability mass assigned to $\Theta$, i.e., $m(\Theta)$, is called the ignorance degree. If $m(D) > 0$, then $D$ is called a focal element, and all focal elements constitute a body of evidence.
Definition 7
[31]. Let $m_1$ and $m_2$ be two independent pieces of evidence defined on the same frame of discernment $\Theta$; the combination of $m_1$ and $m_2$, denoted by $m = m_1 \oplus m_2$, is defined as
$$m(A) = \begin{cases} \dfrac{1}{1 - K} \sum_{B, C \in 2^\Theta,\, B \cap C = A} m_1(B)\, m_2(C), & A \ne \emptyset \\ 0, & A = \emptyset \end{cases} \quad (9)$$
where $K = \sum_{B, C \in 2^\Theta,\, B \cap C = \emptyset} m_1(B)\, m_2(C)$, and $K$ represents the conflict coefficient between $m_1$ and $m_2$.
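The following Python sketch (a minimal illustration, not tied to any particular library) implements Dempster's combination rule of Equation (9) for mass functions stored as dictionaries whose keys are frozensets of hypotheses; the frame of discernment and the mass values in the example are hypothetical.

```python
from itertools import product

# Minimal sketch (illustrative only) of Dempster's rule, Eq. (9).
# A mass function is a dict mapping frozenset focal elements to masses summing to 1.

def combine(m1: dict, m2: dict) -> dict:
    """Combine two independent mass functions on the same frame of discernment."""
    fused, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            fused[inter] = fused.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc          # K: mass that falls on the empty set
    return {d: v / (1.0 - conflict) for d, v in fused.items()}

# Example on Theta = {A1, A2}
theta = frozenset({"A1", "A2"})
m1 = {frozenset({"A1"}): 0.6, theta: 0.4}
m2 = {frozenset({"A1"}): 0.5, frozenset({"A2"}): 0.3, theta: 0.2}
print(combine(m1, m2))
```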
Definition 8
[13]. For a proposition $B \in 2^\Theta$, the confidence function $Bel: 2^\Theta \to [0, 1]$ and the plausibility function $Pl: 2^\Theta \to [0, 1]$ are, respectively, defined as
$$Bel(B) = \sum_{\emptyset \ne C \subseteq B} m(C), \quad (10)$$
$$Pl(B) = \sum_{C \cap B \ne \emptyset} m(C). \quad (11)$$
In the special case when $B = \{A_i\}$, Equations (10) and (11) can be simplified as
$$Bel(\{A_i\}) = m(\{A_i\}), \quad (12)$$
$$Pl(\{A_i\}) = \sum_{C \cap \{A_i\} \ne \emptyset} m(C). \quad (13)$$
where $Bel(B)$ and $Pl(B)$ are, respectively, the lower and upper bound functions of proposition $B$, satisfying $Bel(B) \le Pl(B)$. Because $Bel$ is the confidence function and $Pl$ is the plausibility function, they can also be called the lower and upper bounds of trust. Therefore, $[Bel(B), Pl(B)]$ can be used as the trust interval of proposition $B$.

3.3. Existing D–S Evidence Theory-Based Methods

Xiao [13] and Fei et al. [33] proposed different D–S evidence theory-based methods for ranking alternatives. Xiao [13] applied the confidence values of alternatives to rank them. In this case, two alternatives with the same confidence values cannot be differentiated; see Example 1. To make up for this defect, Fei et al. [33] suggested sorting alternatives based on the centers of their confidence and plausibility values. Unfortunately, although the distinguishing power of the method in [33] is stronger than that of the method in [13], it is unable to discriminate alternatives which have identical centers; see Example 2.
Example 1.
Suppose $\Theta = \{A_1, A_2, A_3, A_4\}$ is an alternative set and the mass function values of the alternatives are shown in Table 1.
According to Equation (9), the comprehensive mass functions of alternatives $A_1$ and $A_3$ are obtained as $m(\{A_1\}) = 0.2522$ and $m(\{A_3\}) = 0.2522$. By virtue of Equation (10), one obtains $Bel(\{A_1\}) = 0.2522$ and $Bel(\{A_3\}) = 0.2522$. Thus, $Bel(\{A_1\}) = Bel(\{A_3\})$. According to the method in [13], it is concluded that $A_1$ is indifferent to $A_3$. In other words, the method in [13] cannot distinguish which alternative is preferred.
In light of Equation (11), one obtains $Pl(\{A_1\}) = 0.3744$ and $Pl(\{A_3\}) = 0.3372$. Hence, $\frac{1}{2}\left(Bel(\{A_1\}) + Pl(\{A_1\})\right) > \frac{1}{2}\left(Bel(\{A_3\}) + Pl(\{A_3\})\right)$. Employing the method in [33], $A_1$ is better than $A_3$. In this respect, the distinguishing power of the method in [33] is stronger than that of the method in [13]. However, there still exist cases where the method in [33] is unable to differentiate two alternatives; see Example 2.
Example 2.
Suppose $\Theta = \{A_1, A_2, A_3, A_4\}$ is an alternative set and the mass function values of some subsets are listed in Table 2.
By Equations (9)–(11), we have $\frac{1}{2}\left(Bel(\{A_1\}) + Pl(\{A_1\})\right) = \frac{1}{2}\left(Bel(\{A_3\}) + Pl(\{A_3\})\right) = 0.3251$, and the distribution is shown in Figure 1. According to the method in [33], the alternatives $A_1$ and $A_3$ cannot be distinguished.
In a word, Example 1 and Example 2 demonstrate that the distinguishing powers of methods [13,33] are not strong enough. Therefore, it is necessary to improve these methods to strengthen their distinguishing powers.

4. A Possible Degree-Based D–S Evidence Theory Method with Online Reviews in the PLTS Context

In this section, a possible degree-based D–S evidence theory method is proposed to resolve decision problems with online reviews in the PLTS environment. By sentiment analyses, online reviews crawled from multiple websites are transformed into five-granularity PLTSs. Thus, a decision matrix corresponding to each website is constructed. Subsequently, a bi-objective programming model is set up to determine attribute weights. Finally, by introducing a confidence–plausibility-based possibility degree, a possible degree-based D–S evidence theory method is presented to rank alternatives for each website. At length, a 0–1 programming model is constructed to obtain the final alternative ranking results.

4.1. Problem Description

Let $A = \{A_1, A_2, \ldots, A_m\}$ be the set of candidate alternatives, and let the online reviews be crawled from $q$ popular websites. Suppose the attribute set is $C = \{C_1, C_2, \ldots, C_n\}$ with weight vector $\bar{w}^{(k)} = (\bar{w}_1^{(k)}, \bar{w}_2^{(k)}, \ldots, \bar{w}_j^{(k)}, \ldots, \bar{w}_n^{(k)})^T$ ($k = 1, 2, \ldots, q$), where $\bar{w}_j^{(k)}$ indicates the weight of attribute $C_j$ for the $k$th website and satisfies $\sum_{j=1}^n \bar{w}_j^{(k)} = 1$. The set $U_{ij}^{(k)} = \{u_{ij1}^{(k)}, u_{ij2}^{(k)}, \ldots, u_{ij\Pi_{ij}^{(k)}}^{(k)}\}$ represents the reviews of alternative $A_i$ on attribute $C_j$ from the $k$th website, and $u_{ij\pi}^{(k)}$ is the $\pi$th review of alternative $A_i$ on attribute $C_j$ from the $k$th website.

4.2. Determine the Decision Matrix with PLTSs Based on Online Reviews

This section collects online reviews on NEVs and transforms them into PLTSs with a five-granularity linguistic term set, thereby forming the decision matrix with PLTSs. This process involves three issues: (1) data collection and preprocessing; (2) converting online reviews into appropriate linguistic terms; (3) determining the probabilities corresponding to the linguistic terms.

4.2.1. Data Collection and Preprocessing

Using the GooSeeker information collection tool, online reviews posted with respect to the given attributes are collected from $q$ popular websites, such as "autohome.com.cn", "xcar.com.cn" and "pcauto.com.cn", accessed on 2 February 2024. Afterwards, the collected data are preprocessed with Jieba in Python 2021.1 software, including deleting duplicated reviews, removing useless words and performing word segmentation.

4.2.2. Convert Online Reviews into Appropriate Linguistic Terms by Sentiment Analyses

In order to transform the preprocessed reviews into linguistic terms, it is necessary to conduct sentiment analyses. For different attributes, the sentiment words are distinct, and the sentiment orientations of the same sentiment word may even be opposite. For example, the sentiment of the word "lower" is positive when it describes the attribute "price", but negative when it describes the degree of comfort. Therefore, to ensure the accuracy of the sentiment analysis results, this paper compiles a different sentiment dictionary for each attribute. Based on the common CNKI sentiment vocabulary, the attribute sentiment dictionaries are compiled by adding words appearing more than 100 times in the reviews and dividing the sentiment words in the CNKI vocabulary among the attributes. For convenience, Table 3 lists the symbols appearing in this section.
A review $u_{ij\pi}^{(k)}$ may include both positive and negative comments. Thus, it is natural to determine the sentiment orientation based on the numbers of positive and negative words. Let $S = \{s_{-2} = \text{more annoying}, s_{-1} = \text{annoying}, s_0 = \text{general}, s_1 = \text{like}, s_2 = \text{more like}\}$ be a linguistic term set. According to the compiled attribute dictionary, statistics are gathered on the positive and negative sentiment words as well as the positive and negative degree adverbs. If $P_{ij\pi}^{(k)} > N_{ij\pi}^{(k)}$ and $PD_{ij\pi}^{(k)} > ND_{ij\pi}^{(k)}$, the number of positive sentiment words is larger than that of negative sentiment words, and the number of positive degree words is greater than that of negative degree words. In this case, the review $u_{ij\pi}^{(k)}$ indicates that the user is quite satisfied with the $i$th alternative with respect to the $j$th attribute, so this review is assigned the term "$s_2$". For example, suppose the $\pi$th review from the $k$th website, evaluating the $i$th type of car on the $j$th attribute, is "The screen is very large and clear, but the sound quality is bad". In this review, the words "large" and "clear" are both positive sentiment words, while the word "bad" is a negative sentiment word. In addition, the word "very" is a positive degree word because it modifies the positive sentiment word "large". Thus, one derives $P_{ij\pi}^{(k)} = 2 > N_{ij\pi}^{(k)} = 1$ and $PD_{ij\pi}^{(k)} = 1 > ND_{ij\pi}^{(k)} = 0$, so this review is assigned "$s_2$". Similarly, if $P_{ij\pi}^{(k)} > N_{ij\pi}^{(k)}$ but $PD_{ij\pi}^{(k)} = ND_{ij\pi}^{(k)}$, the review is assigned "$s_1$". Following this idea, Algorithm 1 is designed to transform reviews into the corresponding linguistic terms.
Algorithm 1. Transforming reviews into linguistic terms
Input: the preprocessed user reviews $U_{ij}^{(k)} = \{u_{ij1}^{(k)}, u_{ij2}^{(k)}, \ldots, u_{ij\Pi_{ij}^{(k)}}^{(k)}\}$, $k = 1, 2, \ldots, q$, $i = 1, 2, \ldots, m$, $j = 1, 2, \ldots, n$, $\pi = 1, 2, \ldots, \Pi_{ij}^{(k)}$; the positive word set $PV$, negative word set $NV$, degree adverb set $DGV$ and negation word set $DNV$ of the new sentiment dictionary; the linguistic term set $S = \{s_{-2}, s_{-1}, s_0, s_1, s_2\}$.
Output: the sentiment orientation $\gamma_{ij\pi}^{(k)}$ of $u_{ij\pi}^{(k)}$.
1: for $k = 1$ to $q$ do
2:  for $i = 1$ to $m$ do
3:   for $j = 1$ to $n$ do
4:    for $\pi = 1$ to $\Pi_{ij}^{(k)}$ do
5:     let $P_{ij\pi}^{(k)} = 0$, $N_{ij\pi}^{(k)} = 0$ do
6:      if $(u_{ij\pi}^{(k)} \cap PV) \ne \emptyset$ and $(u_{ij\pi}^{(k)} \cap DNV) = \emptyset$ then $P_{ij\pi}^{(k)} \leftarrow |u_{ij\pi}^{(k)} \cap PV|$
7:      else if $(u_{ij\pi}^{(k)} \cap NV) \ne \emptyset$ and $(u_{ij\pi}^{(k)} \cap DNV) = \emptyset$ then $N_{ij\pi}^{(k)} \leftarrow |u_{ij\pi}^{(k)} \cap NV|$
8:      else if $((u_{ij\pi}^{(k)} \cap PV) \ne \emptyset$ or $(u_{ij\pi}^{(k)} \cap NV) \ne \emptyset)$ and $(DN = (u_{ij\pi}^{(k)} \cap DNV) \ne \emptyset)$ then
9:       if $\min(dis(PV, DN)) \ge \min(dis(NV, DN))$ then $P_{ij\pi}^{(k)} \leftarrow P_{ij\pi}^{(k)} + 2|u_{ij\pi}^{(k)} \cap DNV|$
10:       else $N_{ij\pi}^{(k)} \leftarrow N_{ij\pi}^{(k)} + 2|u_{ij\pi}^{(k)} \cap DNV|$
11:       end if
12:      else $P_{ij\pi}^{(k)} \leftarrow 0$, $N_{ij\pi}^{(k)} \leftarrow 0$
13:      end if
14:      let $PD_{ij\pi}^{(k)} = 0$, $ND_{ij\pi}^{(k)} = 0$ do
15:       if $(DG = (u_{ij\pi}^{(k)} \cap DGV) \ne \emptyset)$ and $(\min(dis(PV, DG)) \ge \min(dis(NV, DG)))$ then $ND_{ij\pi}^{(k)} \leftarrow |u_{ij\pi}^{(k)} \cap DGV|$
16:       else if $(DG = (u_{ij\pi}^{(k)} \cap DGV) \ne \emptyset)$ and $(\min(dis(PV, DG)) < \min(dis(NV, DG)))$ then $PD_{ij\pi}^{(k)} \leftarrow |u_{ij\pi}^{(k)} \cap DGV|$
17:       else $PD_{ij\pi}^{(k)} \leftarrow 0$, $ND_{ij\pi}^{(k)} \leftarrow 0$
18:       end if
19:       if $(P_{ij\pi}^{(k)} > N_{ij\pi}^{(k)})$ and $(PD_{ij\pi}^{(k)} > ND_{ij\pi}^{(k)})$ then $\gamma_{ij\pi}^{(k)} = s_2$
20:       else if $((P_{ij\pi}^{(k)} > N_{ij\pi}^{(k)})$ and $(PD_{ij\pi}^{(k)} = ND_{ij\pi}^{(k)}))$ or $((P_{ij\pi}^{(k)} = N_{ij\pi}^{(k)})$ and $(PD_{ij\pi}^{(k)} > ND_{ij\pi}^{(k)}))$ then $\gamma_{ij\pi}^{(k)} = s_1$
21:       else if $(P_{ij\pi}^{(k)} = N_{ij\pi}^{(k)} = 0)$ or $((P_{ij\pi}^{(k)} > N_{ij\pi}^{(k)})$ and $(PD_{ij\pi}^{(k)} < ND_{ij\pi}^{(k)}))$ or $((P_{ij\pi}^{(k)} = N_{ij\pi}^{(k)})$ and $(PD_{ij\pi}^{(k)} = ND_{ij\pi}^{(k)}))$
                or $((P_{ij\pi}^{(k)} < N_{ij\pi}^{(k)})$ and $(PD_{ij\pi}^{(k)} > ND_{ij\pi}^{(k)}))$ then $\gamma_{ij\pi}^{(k)} = s_0$
22:       else if $((P_{ij\pi}^{(k)} = N_{ij\pi}^{(k)})$ and $(PD_{ij\pi}^{(k)} < ND_{ij\pi}^{(k)}))$ or $((P_{ij\pi}^{(k)} < N_{ij\pi}^{(k)})$ and $(PD_{ij\pi}^{(k)} = ND_{ij\pi}^{(k)}))$ then $\gamma_{ij\pi}^{(k)} = s_{-1}$
23:       else $\gamma_{ij\pi}^{(k)} = s_{-2}$
24:       end if
25:      end for
26:     end for
27:    end for
28:   end for
29:  end for
30: end for
Note: $dis(PV, DG)$ is computed as the number of words between a positive word and a degree adverb. Similarly, $dis(PV, DN)$, $dis(NV, DG)$ and $dis(NV, DN)$ are calculated. Table 4 illustrates the calculation process. $|A|$ represents the number of elements in $A$.
To understand Algorithm 1 more intuitively, Table 4 and Table 5 provide some examples illustrating the process of transforming a review into a linguistic term.
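To convey the spirit of Algorithm 1, the simplified Python sketch below classifies a single pre-segmented English review into a linguistic term $s_\alpha$. It only handles positive/negative sentiment words and degree adverbs (negation handling and the distance-based attribution of Algorithm 1 are omitted), and the small word sets are hypothetical stand-ins for the compiled attribute dictionaries.

```python
# Simplified, illustrative re-implementation of the spirit of Algorithm 1 (not the
# authors' code): count positive/negative sentiment words and positive/negative degree
# adverbs in one pre-segmented review and map the counts to a linguistic term.
# The word sets below are hypothetical stand-ins for the compiled attribute dictionaries.

POS = {"spacious", "large", "clear", "comfortable", "excellent"}
NEG = {"bad", "small", "noisy", "crowded"}
DEGREE = {"very", "really", "extremely"}

def to_linguistic_term(tokens: list) -> int:
    """Return alpha in {-2, -1, 0, 1, 2} for the linguistic term s_alpha."""
    p = sum(t in POS for t in tokens)            # P: positive sentiment words
    n = sum(t in NEG for t in tokens)            # N: negative sentiment words
    # A degree adverb is attributed to the sentiment word that immediately follows it.
    pd = nd = 0
    for idx, t in enumerate(tokens[:-1]):
        if t in DEGREE:
            if tokens[idx + 1] in POS:
                pd += 1                          # PD: positive degree adverbs
            elif tokens[idx + 1] in NEG:
                nd += 1                          # ND: negative degree adverbs
    if p > n and pd > nd:
        return 2
    if (p > n and pd == nd) or (p == n and pd > nd):
        return 1
    if (p == n and pd < nd) or (p < n and pd == nd):
        return -1
    if p < n and pd < nd:
        return -2
    return 0                                     # neutral / no sentiment words

print(to_linguistic_term("the screen is very large and clear but the sound is bad".split()))
```

Running the example reproduces the assignment discussed above: two positive words, one negative word and one positive degree adverb yield $s_2$.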

4.2.3. Determining the Probability Corresponding to Linguistic Terms

In the process of transforming the reviews into PLTSs, it is necessary to determine the probabilities corresponding to the linguistic terms derived in Section 4.2.2. We denote by $T_i^{(k)}$ the number of users evaluating the $i$th alternative on the $k$th website. Generally, users evaluate alternatives on all given attributes. Sometimes, however, a few users may omit evaluations on some attributes. Suppose $\Pi_{ij}^{(k)}$ represents the number of reviews on the $i$th alternative with respect to the $j$th attribute for the $k$th website, and $\emptyset_{ij}^{(k)}$ indicates the number of users who omitted the evaluation of the $i$th alternative on the $j$th attribute for the $k$th website. Thus, $T_i^{(k)} = \Pi_{ij}^{(k)} + \emptyset_{ij}^{(k)}$. Furthermore, assume $G_{\alpha ij}^{(k)}$ is the number of reviews from the $k$th website that evaluate the $i$th alternative as $s_\alpha$ with respect to the $j$th attribute, let $\tilde{p}_{\alpha ij}^{(k)}$ be the initial probability of the linguistic term $s_\alpha$ (one of the possible linguistic terms transformed from the reviews $u_{ij\pi}^{(k)}$), and let $\tilde{p}_{\emptyset ij}^{(k)}$ denote the proportion associated with missing information. Thus, the probabilities $\tilde{p}_{\alpha ij}^{(k)}$ and $\tilde{p}_{\emptyset ij}^{(k)}$ can be obtained as
$$\tilde{p}_{\alpha ij}^{(k)} = G_{\alpha ij}^{(k)} / T_i^{(k)}, \quad (14)$$
$$\tilde{p}_{\emptyset ij}^{(k)} = \emptyset_{ij}^{(k)} / T_i^{(k)}. \quad (15)$$
where $T_i^{(k)}$ represents the number of users evaluating the $i$th alternative on the $k$th website.
Remark 1.
In the $k$th website, if a user does not provide an evaluation of the $i$th alternative on the $j$th attribute, any linguistic term may be possible. Thus, the probability $\tilde{p}_{\emptyset ij}^{(k)}$ can be divided equally among all possible linguistic terms.
According to Remark 1, the adjusted probability associated with a possible linguistic term $s_\alpha$, denoted by $p_{\alpha ij}^{(k)}$, can be derived as
$$p_{\alpha ij}^{(k)} = \tilde{p}_{\alpha ij}^{(k)} + \tilde{p}_{\emptyset ij}^{(k)}/(2\eta + 1). \quad (16)$$
where $2\eta + 1$ is the cardinality of the given linguistic term set $S$.
The aim of Equation (16) is to divide the probability $\tilde{p}_{\emptyset ij}^{(k)}$ equally among all possible linguistic terms. Thus, the obtained probabilistic linguistic terms are standardized PLTSs in which the sum of the probabilities of all possible linguistic terms equals 1.
By virtue of the linguistic terms $s_\alpha$ associated with the probabilities $p_{\alpha ij}^{(k)}$, a decision matrix for the $k$th website, denoted by $D^{(k)} = [h_{ij}^{(k)}(p)]_{m \times n}$, is constructed, where $h_{ij}^{(k)}(p) = \{ s_{\alpha ij}^{(k)}(p_{\alpha ij}^{(k)}) \mid \alpha = -2, -1, 0, 1, 2 \}$.
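The following Python sketch (illustrative, not the authors' code) applies Equations (14)–(16) to turn the counts of linguistic terms for one alternative–attribute pair into a standardized PLTS; the counts in the example are hypothetical.

```python
# Minimal sketch of Eqs. (14)-(16): turning the counts of linguistic terms into a
# standardized PLTS for one alternative/attribute pair of one website.

def standardized_plts(counts: dict, n_users: int, eta: int = 2) -> dict:
    """counts: {alpha: number of reviews assigned to s_alpha}; n_users: T_i^(k)."""
    n_reviewed = sum(counts.values())                 # Pi_ij^(k)
    p_missing = (n_users - n_reviewed) / n_users      # Eq. (15)
    return {
        alpha: counts.get(alpha, 0) / n_users         # Eq. (14)
        + p_missing / (2 * eta + 1)                   # Eq. (16): spread the missing mass
        for alpha in range(-eta, eta + 1)
    }

# Example: 200 users, 180 of whom commented on this attribute.
plts = standardized_plts({2: 60, 1: 80, 0: 30, -1: 10, -2: 0}, n_users=200)
print(plts, round(sum(plts.values()), 10))            # probabilities sum to 1
```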

4.3. Determine Attribute Weights by Constructing a Bi-Objective Programming Model

Attribute weights play an important role in ranking alternatives, so how to determine them is a key issue. On the one hand, there is uncertainty in the decision information expressed by PLTSs; the smaller the uncertainty of the decision information with respect to an attribute, the greater the weight that should be assigned to this attribute. On the other hand, according to the principle of maximizing deviations, the larger the deviations between the attribute values under the same attribute, the bigger this attribute weight should be. Keeping this idea in mind, by minimizing the uncertainty and maximizing the deviations between alternatives, a bi-objective programming model is constructed to determine attribute weights objectively.

4.3.1. A Minimum Probabilistic Linguistic Entropy Model

According to the JS entropy proposed in [34], the uncertainty degree of attribute $C_j$, denoted by $H_{JS}^{(k)}(C_j)$, can be calculated as
$$H_{JS}^{(k)}(C_j) = \sum_{\Theta_{\tau j} \in 2^A} m(\Theta_{\tau j}) \log\left(|\Theta_{\tau j}|\right) - \sum_{A_{ij}^{(k)} \in A} Pl\_Pm\left(A_{ij}^{(k)}\right) \log\left[Pl\_Pm\left(A_{ij}^{(k)}\right)\right]. \quad (17)$$
where $\Theta_{\tau j}$ represents the $\tau$th element of the power set $2^A$ with respect to $C_j$, and $|\Theta_{\tau j}|$ is the cardinality of the set $\Theta_{\tau j}$. As only singleton sets comprising a single alternative are considered (see Section 5.2), $|\Theta_{\tau j}| = 1$, so $\log(|\Theta_{\tau j}|) = 0$. Therefore, the JS entropy in Equation (17) degenerates into the Shannon entropy, i.e.,
$$H_{JS}^{(k)}(C_j) = - \sum_{A_{ij}^{(k)} \in A} Pl\_Pm\left(A_{ij}^{(k)}\right) \log\left[Pl\_Pm\left(A_{ij}^{(k)}\right)\right]. \quad (18)$$
where $A_{ij}^{(k)}$ is the $i$th alternative with respect to the $j$th attribute for the $k$th website, $Pl\_Pm(A_{ij}^{(k)}) = \frac{Pl(A_{ij}^{(k)})}{\sum_{i=1}^m Pl(A_{ij}^{(k)})} = \frac{E(h_{ij}^{(k)}(p))}{\sum_{i=1}^m E(h_{ij}^{(k)}(p))}$, $E(h_{ij}^{(k)}(p)) = \sum_{l=1}^{L_o} f(s_{\alpha}^{(l)}) p_{\alpha ij}^{(k)(l)} / \sum_{l=1}^{L_o} p_{\alpha ij}^{(k)(l)}$, and $Pl\_Pm(A_{ij}^{(k)})$ represents the plausibility transformation function of $A_{ij}^{(k)}$.
By normalizing $H_{JS}^{(k)}(C_j)$, the standard entropy of attribute $C_j$ for the $k$th website is obtained, i.e.,
$$\bar{H}_{JS}^{(k)}(C_j) = \frac{H_{JS}^{(k)}(C_j)}{\sum_{j=1}^n H_{JS}^{(k)}(C_j)}. \quad (19)$$
Thus, the weighted uncertainty of the decision information from the $k$th website can be expressed as
$$\tilde{H}_{JS}^{(k)} = \sum_{j=1}^n w_j^{(k)} \bar{H}_{JS}^{(k)}(C_j). \quad (20)$$
where $w_j^{(k)}$ is the weight of attribute $C_j$ for the $k$th website.
As mentioned before, a smaller uncertainty with respect to an attribute implies a bigger weight for this attribute. Hence, by minimizing the probabilistic linguistic entropy for the $k$th website, an optimization model is built as
$$\min \tilde{H}_{JS}^{(k)} \quad \text{s.t.} \quad 0 < w_j^{(k)} < 1,\ \sum_{j=1}^n \left(w_j^{(k)}\right)^2 = 1. \quad (21)$$

4.3.2. Construct a Bi-Objective Programming Model to Determine Attribute Weights

On the other hand, the total deviation between alternatives for the $k$th website, denoted by $\sigma^{(k)}(w)$, can be expressed as
$$\sigma^{(k)}(w) = \sum_{j=1}^n w_j^{(k)} \sum_{i=1}^m \sum_{t=1}^m \left( E\left(h_{ij}^{(k)}(p)\right) - E\left(h_{tj}^{(k)}(p)\right) \right)^2. \quad (22)$$
By maximizing the total deviation, another optimization model is constructed as follows:
$$\max \sigma^{(k)}(w) \quad \text{s.t.} \quad 0 < w_j^{(k)} < 1,\ \sum_{j=1}^n \left(w_j^{(k)}\right)^2 = 1. \quad (23)$$
Combining Equations (21) and (23), a bi-objective programming model is generated as
$$\max \sigma^{(k)}(w),\ \min \tilde{H}_{JS}^{(k)} \quad \text{s.t.} \quad 0 < w_j^{(k)} < 1,\ \sum_{j=1}^n \left(w_j^{(k)}\right)^2 = 1. \quad (24)$$
By setting a balancing coefficient $\varepsilon$, Equation (24) is transformed into a single-objective programming model, which can be expressed as
$$\min\ \varepsilon \tilde{H}_{JS}^{(k)} - (1 - \varepsilon)\, \sigma^{(k)}(w) \quad \text{s.t.} \quad 0 < w_j^{(k)} < 1,\ \sum_{j=1}^n \left(w_j^{(k)}\right)^2 = 1. \quad (25)$$
By solving Equation (25), the weights $w_j^{(k)}$ are obtained. In order to avoid the interference of non-positive values, the obtained weights are adjusted as
$$\hat{w}_j^{(k)} = e^{w_j^{(k)}}. \quad (26)$$
By standardizing $\hat{w}_j^{(k)}$ ($j = 1, 2, \ldots, n$), the standardized attribute weights are derived as
$$\bar{w}_j^{(k)} = \frac{e^{w_j^{(k)}}}{\sum_{j=1}^n e^{w_j^{(k)}}}. \quad (27)$$
Thereby, the weighted decision matrix is generated as
$$D_w^{(k)} = \left[x_{ij}^{(k)}\right]_{m \times n} = \bar{w}_j^{(k)} D^{(k)}. \quad (28)$$
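A minimal sketch of the weight-determination procedure is given below, assuming the score matrix $E(h_{ij}^{(k)}(p))$ of one website is already available as a NumPy array; it builds the entropy of Equations (18) and (19), the deviation of Equation (22), solves the single-objective model (25) numerically with SciPy and normalizes the result via Equations (26) and (27). The random scores are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch (illustrative assumptions, not the authors' code) of Section 4.3:
# attribute weights from the single-objective model (25) and normalization (26)-(27).
# `scores` holds E(h_ij(p)) for one website (m alternatives x n attributes).

def attribute_weights(scores: np.ndarray, eps: float = 0.5) -> np.ndarray:
    m, n = scores.shape
    # Shannon-type entropy of each attribute from the plausibility transformation, Eqs. (18)-(19)
    pl_pm = scores / scores.sum(axis=0)
    h = -(pl_pm * np.log(pl_pm)).sum(axis=0)
    h_bar = h / h.sum()
    # Total deviation of each attribute, Eq. (22)
    dev = np.array([np.sum((scores[:, j, None] - scores[None, :, j]) ** 2) for j in range(n)])
    # Single-objective model (25): min eps*H - (1-eps)*sigma subject to sum(w^2) = 1
    obj = lambda w: eps * (w @ h_bar) - (1 - eps) * (w @ dev)
    cons = {"type": "eq", "fun": lambda w: np.sum(w ** 2) - 1}
    res = minimize(obj, np.full(n, 1 / np.sqrt(n)), bounds=[(1e-6, 1)] * n, constraints=cons)
    # Exponential adjustment and normalization, Eqs. (26)-(27)
    w_hat = np.exp(res.x)
    return w_hat / w_hat.sum()

scores = np.random.default_rng(1).uniform(0.3, 0.9, size=(5, 7))
print(attribute_weights(scores).round(4))
```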

4.4. A Possible Degree-Based D–S Evidence Theory Method for Ranking Alternatives in the PLTS Context

In this section, a possible degree-based D–S evidence theory method is proposed to rank alternatives in the probabilistic linguistic environment. In this method, a new mass function is defined by fusing score values and the proposed entropy in Section 4.3.1. According to the defined mass function, a novel confidence function and a plausibility function of alternatives are introduced, respectively. Afterwards, a confidence–plausibility-based possible degree is recommended for deriving the ranking values of alternatives for the kth website. Finally, to derive the final rankings of alternatives, a 0–1 programming model is established for synthesizing the ranking results for different websites.

4.4.1. New Confidence and Plausibility Functions

According to the score function in Equation (7), the BPA of alternative $A_i$ with respect to attribute $C_j$ for the $k$th website, denoted by $m_{C_j}^{(k)}(A_i)$, can be calculated as
$$m_{C_j}^{(k)}(\emptyset) = 0, \quad (29)$$
$$m_{C_j}^{(k)}(A_i) = \bar{E}\left(x_{ij}^{(k)}\right)\left(1 - \bar{H}_{JS}^{(k)}(C_j)\right), \quad (30)$$
$$m_{C_j}^{(k)}(\Theta) = 1 - \sum_{i=1}^m m_{C_j}^{(k)}(A_i). \quad (31)$$
where $\bar{E}(x_{ij}^{(k)}) = \frac{E(x_{ij}^{(k)})}{\sum_{t=1}^m E(x_{tj}^{(k)})}$, $E(x_{ij}^{(k)}) = \sum_{l=1}^{L_o} f(s_{\alpha ij}^{(k)(l)}) p_{\alpha ij}^{(k)(l)} / \sum_{l=1}^{L_o} p_{\alpha ij}^{(k)(l)}$ and $\Theta = \{A_1, A_2, \ldots, A_m\}$.
Then, according to Dempster's combination rule (see Definition 7), the integrated BPAs of the alternatives are generated by fusing the BPAs with regard to all attributes. That is,
$$m_C^{(k)}(A_i) = \left(\left(m_{C_1}^{(k)} \oplus m_{C_2}^{(k)}\right) \oplus \cdots\right) \oplus m_{C_n}^{(k)}(A_i). \quad (32)$$
According to Equations (12) and (13) in Section 3.2, the confidence and plausibility functions of alternative $A_i$ for the $k$th website, denoted by $Bel^{(k)}(A_i)$ and $Pl^{(k)}(A_i)$, can be designed as
$$Bel^{(k)}(A_i) = m_C^{(k)}(A_i), \quad (33)$$
$$Pl^{(k)}(A_i) = \sum_{B \cap \{A_i\} \ne \emptyset} m_C^{(k)}(B). \quad (34)$$
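The Python sketch below (illustrative assumptions, not the authors' implementation) chains Equations (29)–(34) for one website: it builds a BPA per attribute over the singletons and $\Theta$, fuses the BPAs with Dempster's rule specialized to this focal-element structure, and reads off the confidence and plausibility values; the inputs are placeholders.

```python
import numpy as np

# Minimal sketch of Eqs. (29)-(34) for one website: a BPA per attribute from normalized
# scores and attribute entropy, fused with Dempster's rule over singletons plus Theta.

def attribute_bpa(col_scores: np.ndarray, h_bar_j: float) -> np.ndarray:
    """Return [m(A_1), ..., m(A_m), m(Theta)] for one attribute, Eqs. (29)-(31)."""
    masses = col_scores / col_scores.sum() * (1.0 - h_bar_j)
    return np.append(masses, 1.0 - masses.sum())

def fuse(bpa1: np.ndarray, bpa2: np.ndarray) -> np.ndarray:
    """Dempster's rule restricted to focal elements {A_i} and Theta."""
    m = len(bpa1) - 1
    singles = bpa1[:m] * bpa2[:m] + bpa1[:m] * bpa2[m] + bpa1[m] * bpa2[:m]
    theta = bpa1[m] * bpa2[m]
    k = 1.0 - singles.sum() - theta                  # conflict between distinct singletons
    return np.append(singles, theta) / (1.0 - k)

def bel_pl(weighted_scores: np.ndarray, h_bar: np.ndarray):
    bpas = [attribute_bpa(weighted_scores[:, j], h_bar[j]) for j in range(weighted_scores.shape[1])]
    fused = bpas[0]
    for b in bpas[1:]:
        fused = fuse(fused, b)                       # Eq. (32)
    bel = fused[:-1]                                 # Eq. (33)
    pl = fused[:-1] + fused[-1]                      # Eq. (34): add the mass on Theta
    return bel, pl

scores = np.random.default_rng(2).uniform(0.1, 0.9, size=(5, 7))   # placeholder E(x_ij)
h_bar = np.full(7, 1 / 7)                                           # placeholder entropies
print(*bel_pl(scores, h_bar), sep="\n")
```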
To resolve the drawbacks of the methods in [13,33] mentioned in Section 3.3, this section proposes a possible degree ranking method combining the confidence and plausibility functions. Following the interval possibility degree [35,36], we define the possibility degrees of the supporting interval and the trust interval, as depicted in Figure 2.

4.4.2. A Confidence–Plausibility-Based Possible Degree for Ranking Alternatives for Each Website

Definition 9.
Let $Bel^{(k)}(A_i)$ and $Bel^{(k)}(A_t)$ be, respectively, the confidence values of alternatives $A_i$ and $A_t$ for the $k$th website; then, the supporting possibility degree of alternative $A_i$ being prior to $A_t$, denoted by $CP^{(k)}(A_i \succ A_t)$, is defined as
$$CP^{(k)}(A_i \succ A_t) = \frac{1}{2}\left( 1 + \frac{\left(Bel^{(k)}(A_i) - 0\right) + \left(0 - Bel^{(k)}(A_t)\right)}{\left(Bel^{(k)}(A_i) - 0\right) + \left(Bel^{(k)}(A_t) - 0\right)} \right) = \frac{1}{2}\left( 1 + \frac{Bel^{(k)}(A_i) - Bel^{(k)}(A_t)}{Bel^{(k)}(A_i) + Bel^{(k)}(A_t)} \right). \quad (35)$$
Definition 10.
Suppose $T_{A_i}^{(k)} = [Bel^{(k)}(A_i), Pl^{(k)}(A_i)]$ and $T_{A_t}^{(k)} = [Bel^{(k)}(A_t), Pl^{(k)}(A_t)]$ are, respectively, the trust intervals of alternatives $A_i$ and $A_t$ for the $k$th website; then, the trust possibility degree of alternative $A_i$ being prior to $A_t$, denoted by $TP^{(k)}(A_i \succ A_t)$, can be defined as
$$TP^{(k)}(A_i \succ A_t) = \frac{1}{2}\left( 1 + \frac{\left(Pl^{(k)}(A_i) - Bel^{(k)}(A_t)\right) + \left(Bel^{(k)}(A_i) - Pl^{(k)}(A_t)\right)}{\left(Pl^{(k)}(A_i) - Bel^{(k)}(A_i)\right) + \left(Pl^{(k)}(A_t) - Bel^{(k)}(A_t)\right)} \right). \quad (36)$$
Fusing the possibility degrees $CP^{(k)}(A_i \succ A_t)$ and $TP^{(k)}(A_i \succ A_t)$ with a balance coefficient $v$ ($0 \le v \le 1$), the comprehensive possibility degree of alternative $A_i$ being prior to $A_t$ for the $k$th website is obtained as
$$P^{(k)}(A_i \succ A_t) = v\, CP^{(k)}(A_i \succ A_t) + (1 - v)\, TP^{(k)}(A_i \succ A_t). \quad (37)$$
Thereby, the total possibility degree of alternative $A_i$ can be calculated as
$$P^{(k)}(A_i) = \sum_{t=1}^m P^{(k)}(A_i \succ A_t). \quad (38)$$
By virtue of Equation (38), all alternatives are sorted for the $k$th website.
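The following sketch (illustrative, not the authors' code) computes the comprehensive and total possibility degrees of Equations (35)–(38) from given confidence and plausibility values. Truncating the trust possibility degree to $[0, 1]$ is an assumption made here, consistent with the 0/1 entries of the $TP^{(1)}$ matrix reported in Section 5.2; the Bel/Pl values in the example are hypothetical.

```python
import numpy as np

# Minimal sketch of Eqs. (35)-(38): rank the alternatives of one website from their
# confidence (Bel) and plausibility (Pl) values.

def total_possibility(bel: np.ndarray, pl: np.ndarray, v: float = 0.65) -> np.ndarray:
    m = len(bel)
    p = np.empty((m, m))
    for i in range(m):
        for t in range(m):
            cp = 0.5 * (1 + (bel[i] - bel[t]) / (bel[i] + bel[t]))              # Eq. (35)
            tp = 0.5 * (1 + ((pl[i] - bel[t]) + (bel[i] - pl[t]))
                        / ((pl[i] - bel[i]) + (pl[t] - bel[t])))                # Eq. (36)
            tp = min(max(tp, 0.0), 1.0)     # assumed truncation to [0, 1]
            p[i, t] = v * cp + (1 - v) * tp                                     # Eq. (37)
    return p.sum(axis=1)                                                        # Eq. (38)

# Hypothetical Bel/Pl values for four alternatives (placeholders, not from the paper).
bel = np.array([0.2522, 0.2000, 0.2522, 0.1500])
pl = np.array([0.3744, 0.3000, 0.3372, 0.2500])
scores = total_possibility(bel, pl)
print(scores)
print(np.argsort(-scores) + 1)   # indices of the alternatives from best to worst
```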
Remark 2.
The confidence–plausibility-based possible degree ranking method absorbs the virtues of method [33] and has a stronger distinguishing power compared with method [13], which can be verified by Example 3 and Example 4.
Example 3.
Continuing Example 1, the calculation of the confidence and plausibility values of the alternatives is shown in Table 6.
According to Equations (35)–(37), let $v = 0.65$; then $P(A_1 \succ A_3) = 0.53143 > 0.5$; thus, $A_1 \succ A_3$.
Example 4.
Continuing Example 2, according to Equations (35)–(37), let $v = 0.65$; then $P(A_1 \succ A_3) = 0.53478 > 0.5$; thus, $A_1 \succ A_3$.

4.4.3. Build a 0–1 Programming Model for Obtaining the Final Alternative Ranking Orders

After deriving the rankings in each website with the confidence–plausibility-based possible degree ranking method, how to obtain the final alternative ranking orders is still an important issue. This section builds a 0–1 programming model to derive the final alternative ranking orders.
For the sake of convenience, suppose the ranking position of alternative $A_i$ for the $k$th website is $R_i^{(k)}$, and the final ranking of alternative $A_i$ is the $r$th position, denoted by $R_{ir}$. It is natural that the consensus degree between $R_i^{(k)}$ and $R_{ir}$ should be as large as possible. The consensus degree index between $R_i^{(k)}$ and $R_{ir}$, denoted by $V_{ir}^{(k)}$, can be expressed as
$$V_{ir}^{(k)} = 1 - \frac{\left| R_i^{(k)} - R_{ir} \right|}{m - 1}. \quad (39)$$
Across all websites, the total consensus degree of the $i$th alternative sorted in the $r$th position can be expressed as
$$V_{ir} = \sum_{k=1}^q V_{ir}^{(k)}. \quad (40)$$
Thus, the consensus matrix $M$ can be constructed as
$$M = \left[ V_{ir} \right]_{m \times m}. \quad (41)$$
Define a 0–1 variable $y_{ir}$ as
$$y_{ir} = \begin{cases} 1, & \text{the final ranking of } A_i \text{ is the } r\text{th position} \\ 0, & \text{otherwise.} \end{cases} \quad (42)$$
In order to determine the final sorting of alternatives, a 0–1 programming model is constructed by maximizing the consensus degrees, i.e.,
$$\begin{aligned} \max\ & z = \sum_{i=1}^m \sum_{r=1}^m y_{ir} V_{ir} \\ \text{s.t.}\ & V_{ir} = \sum_{k=1}^q \left( 1 - \frac{\left| R_i^{(k)} - R_{ir} \right|}{m - 1} \right), \\ & \sum_{i=1}^m y_{ir} = 1,\ r = 1, 2, \ldots, m, \\ & \sum_{r=1}^m y_{ir} = 1,\ i = 1, 2, \ldots, m, \\ & y_{ir} = 0 \text{ or } 1,\ i, r = 1, 2, \ldots, m. \end{aligned} \quad (43)$$
Let $\chi_{ir}^{(k)} = \left| R_i^{(k)} - R_{ir} \right|$; then Equation (43) can easily be transformed into a 0–1 linear programming model as
$$\begin{aligned} \max\ & z = \sum_{i=1}^m \sum_{r=1}^m y_{ir} \sum_{k=1}^q \left( 1 - \frac{\chi_{ir}^{(k)}}{m - 1} \right) \\ \text{s.t.}\ & \chi_{ir}^{(k)} \ge R_i^{(k)} - R_{ir} \quad (i, r = 1, 2, \ldots, m), \\ & \chi_{ir}^{(k)} \ge R_{ir} - R_i^{(k)} \quad (i, r = 1, 2, \ldots, m), \\ & \sum_{i=1}^m y_{ir} = 1 \quad (r = 1, 2, \ldots, m), \\ & \sum_{r=1}^m y_{ir} = 1 \quad (i = 1, 2, \ldots, m), \\ & y_{ir} = 0 \text{ or } 1 \quad (i, r = 1, 2, \ldots, m). \end{aligned} \quad (44)$$
Solving Equation (44), the collective ranking matrix Y = ( y i r ) m × m can be derived. Thus, the final ranking order of the alternatives can be easily obtained and the best alternative is determined.
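Because the constraints of Equation (44) force $y$ to be a permutation matrix, the 0–1 model is a linear assignment problem, so it can be solved exactly with a standard assignment solver instead of a general integer-programming package. The sketch below (illustrative, not the authors' code) does this with SciPy, using the three website rankings obtained in Section 5 as input.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Minimal sketch of Eqs. (39)-(44): the consensus-maximizing permutation is found with a
# linear assignment solver, which is equivalent to solving the 0-1 model exactly.

def final_ranking(site_rankings: np.ndarray) -> np.ndarray:
    """site_rankings[k, i] = position of alternative A_i on website k (1-based)."""
    q, m = site_rankings.shape
    positions = np.arange(1, m + 1)
    # Consensus matrix V_ir, Eqs. (39)-(40): rows = alternatives, columns = final positions
    V = sum(1 - np.abs(site_rankings[k][:, None] - positions[None, :]) / (m - 1) for k in range(q))
    rows, cols = linear_sum_assignment(-V)   # maximize total consensus, Eq. (43)
    return cols + 1                          # final position of each alternative A_1..A_m

# Rankings from the three websites in Section 5 (positions of A_1..A_5 on t_1, t_2, t_3)
site_rankings = np.array([[4, 5, 3, 1, 2],   # t_1: A4 > A5 > A3 > A1 > A2
                          [2, 5, 3, 1, 4],   # t_2: A4 > A1 > A3 > A5 > A2
                          [4, 5, 3, 2, 1]])  # t_3: A5 > A4 > A3 > A1 > A2
print(final_ranking(site_rankings))          # e.g. [4 5 3 1 2]: A4 first, A2 last
```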

4.5. The Structure of the Possible Degree-Based D–S Evidence Theory Method in the PLTS Context

Based on the above analysis, the decision process for the possible degree-based D–S evidence theory method in the PLTS context is summarized as below.
Step 1. Crawl online reviews from multiple websites and preprocess them.
Step 2. Determine evaluating attributes based on ones supplied in the websites.
Step 3. According to Algorithm 1, determine the linguistic term $s_\alpha$ corresponding to each review.
Step 4. Determine the probabilities $p_{\alpha ij}^{(k)}$ corresponding to the linguistic terms $s_\alpha$ by Equations (14)–(16).
Step 5. Determine the attribute weights by Equations (25)–(27).
Step 6. Establish the weighted decision matrix $D_w^{(k)}$ by Equation (28).
Step 7. Calculate the BPAs of the alternatives on each attribute and the integrated BPAs by Equations (29)–(32).
Step 8. Compute the confidence and plausibility values of the alternatives based on Equations (33) and (34).
Step 9. Calculate the possibility degrees of the supporting and trust intervals between alternatives by Equations (35) and (36).
Step 10. Calculate the comprehensive possibility degrees between alternatives by Equation (37).
Step 11. Obtain the total possibility degrees $P^{(k)}(A_i)$ by Equation (38) and rank the alternatives for each website.
Step 12. Determine the final ranking orders of alternatives by Equation (44) and select the best one.
The main steps of the method proposed are shown in Figure 3.

5. A Case Study of Selecting New Energy Vehicles

In this section, we provide a case study of selecting new energy vehicles to demonstrate the application of the proposed method.

5.1. Description of the Problem

In recent years, in order to protect the environment and reduce carbon emissions, the Chinese government has strongly advocated that new energy vehicles gradually replace oil-fueled vehicles and has promulgated supporting policies, such as exempting NEVs from the purchase tax, freely offering battery-charging equipment and distributing financial subsidies to NEV manufacturers. With the strong support of these policies, more and more car manufacturers have shifted their focus from oil-fueled vehicles to NEVs. Meanwhile, many consumers are inclined toward NEVs. In this case, it is difficult for consumers to select a satisfactory NEV from many brands because they are usually unfamiliar with professional car knowledge and structures. Hence, a car consumer often browses online reviews on E-commerce platforms before buying a NEV, such as autohome.com.cn, xcar.com.cn and so on. Here, we consider five popular types of NEVs, including AION S ($A_1$), Tesla Model Y ($A_2$), Great Wall Euler Good Cat ($A_3$), BYD Han ($A_4$) and BYD Qin ($A_5$). In what follows, we apply the proposed method to recommend the best popular type of NEV based on online reviews from three platforms, i.e., autohome.com ($t_1$), xcar.com ($t_2$) and pcauto.com ($t_3$).

5.2. Solving Process by the Proposed Method

Step 1. Crawl and preprocess online reviews on each type of NEVs from websites.
Using the GooSeeker information collection tool, we crawled 200 user online reviews on each type of NEV mentioned above from the websites $t_1$, $t_2$ and $t_3$. The contents included in each review are shown in Figure 4. It can be observed from Figure 4 that each review provides comments on each attribute given by the website. Some online reviews are listed in Table A1 in Appendix A. Subsequently, we preprocessed the crawled reviews by deleting duplicate reviews, segmenting words, and cleaning and filtering noise, including emoticons and incorrect sentences. Owing to space limitations, we only show the preprocessed reviews of website $t_1$ on the alternatives with respect to all attributes; please see Table 7.
Step 2. Determine evaluating attributes.
According to Figure 4, each website provides different evaluation criteria. To evaluate the alternatives comprehensively, we choose the criteria provided by the website which supplies the most criteria as the evaluating attributes. The determined attributes are $C = \{C_1, C_2, C_3, C_4, C_5, C_6, C_7\}$, including the space ($C_1$), the comfort ($C_2$), the battery life ($C_3$), the appearance ($C_4$), the interior ($C_5$), the cost performance ($C_6$) and the intelligence ($C_7$).
Step 3. Transform preprocessed online reviews into linguistic terms.
By Algorithm 1 and the compiled sentimental dictionary, the preprocessed online reviews in Step 1 are transformed into proper linguistic terms. For the detailed transforming process, please see Table 5 in Section 4.2.2.
Step 4. Compute the probabilities associated with linguistic terms.
To facilitate computing the probabilities, we count the number of reviews corresponding to each linguistic term and the number of users who miss evaluations on some attributes, i.e., $G_{\alpha ij}^{(1)}$ and $\emptyset_{ij}^{(1)}$. The statistical results are shown in Table A2 in Appendix A. According to Equations (14) and (15), the probabilities $\tilde{p}_{\alpha ij}^{(k)}$ and $\tilde{p}_{\emptyset ij}^{(k)}$ are calculated and shown in Table A3 in Appendix A. By virtue of Equation (16), the decision matrix $D^{(k)} = [h_{ij}^{(k)}(p)]_{m \times n}$ with PLTSs is obtained and listed in Table 8; for details, please refer to Matrix A1 in Appendix A.
Step 5. Determine attribute weights.
According to Equations (17)–(19), the normalized uncertainty degrees of the attributes are calculated as $\bar{H}_{JS}^{(1)}(C_1) = 0.1428$, $\bar{H}_{JS}^{(1)}(C_2) = 0.1429$, $\bar{H}_{JS}^{(1)}(C_3) = 0.1428$, $\bar{H}_{JS}^{(1)}(C_4) = 0.1429$, $\bar{H}_{JS}^{(1)}(C_5) = 0.1428$, $\bar{H}_{JS}^{(1)}(C_6) = 0.1428$ and $\bar{H}_{JS}^{(1)}(C_7) = 0.1429$. Subsequently, the balancing coefficient $\varepsilon$ in Equation (25) is set to 0.5 and the model is solved. Using Equations (26) and (27), the normalized attribute weight vector is obtained as $\bar{w}^{(1)} = (0.1502, 0.1168, 0.1550, 0.1089, 0.1741, 0.1893, 0.1057)^T$.
Step 6. Derive the weighted decision matrix.
In light of Equation (28) and the operations of PLTSs in Definition 4, the weighted decision matrix $D_w^{(k)} = [x_{ij}^{(k)}]_{m \times n}$ is derived; please see Matrix A2 in Appendix A.
Step 7. Calculate the BPAs of alternatives on each attribute and the integrated BPAs.
By Equations (29)–(31), the BPAs of the alternatives are calculated and shown in Table 9. Furthermore, the integrated BPAs are generated by Equation (32) as $m_C^{(1)}(A_1) = 0.1612$, $m_C^{(1)}(A_2) = 0.1071$, $m_C^{(1)}(A_3) = 0.2119$, $m_C^{(1)}(A_4) = 0.2640$, $m_C^{(1)}(A_5) = 0.2550$ and $m_C^{(1)}(\Theta) = 0.0008$.
Step 8. Compute the confidence values and plausibility values of alternatives.
By virtue of Equations (33) and (34), the confidence and plausibility values of the alternatives, i.e., $Bel^{(1)}(A_i)$ and $Pl^{(1)}(A_i)$, are computed and shown in Table 10.
Step 9. Calculate the possibility degrees of the support and trust intervals between alternatives.
By Equations (35) and (36), the possibility degree matrices associated with the support interval and trust interval are obtained as
$$CP^{(1)} = \begin{pmatrix} 0.5000 & 0.6009 & 0.4320 & 0.3792 & 0.3874 \\ 0.3991 & 0.5000 & 0.3356 & 0.2886 & 0.2957 \\ 0.5680 & 0.6644 & 0.5000 & 0.4453 & 0.4539 \\ 0.6208 & 0.7114 & 0.5547 & 0.5000 & 0.5087 \\ 0.6126 & 0.7043 & 0.5461 & 0.4913 & 0.5000 \end{pmatrix}, \quad TP^{(1)} = \begin{pmatrix} 0.5 & 1 & 0 & 0 & 0 \\ 0 & 0.5 & 0 & 0 & 0 \\ 1 & 1 & 0.5 & 0 & 0 \\ 1 & 1 & 1 & 0.5 & 1 \\ 1 & 1 & 1 & 0 & 0.5 \end{pmatrix}.$$
Step10. Calculate the total possible degree of each alternative and rank alternatives in each website.
Taking v = 0.65 , the comprehensive possible degree matrix is calculated by Equation (37) as
$$P^{(1)} = \begin{pmatrix} 0.5000 & 0.7406 & 0.2808 & 0.2465 & 0.2518 \\ 0.2594 & 0.5000 & 0.2182 & 0.1876 & 0.1922 \\ 0.7192 & 0.7818 & 0.5000 & 0.2895 & 0.2950 \\ 0.7535 & 0.8124 & 0.7105 & 0.5000 & 0.6806 \\ 0.7482 & 0.8078 & 0.7050 & 0.3194 & 0.5000 \end{pmatrix}.$$
Then, the total possibility degrees of the alternatives are derived by Equation (38) as $P^{(1)}(A_1) = 2.0197$, $P^{(1)}(A_2) = 1.3574$, $P^{(1)}(A_3) = 2.5855$, $P^{(1)}(A_4) = 3.4571$ and $P^{(1)}(A_5) = 3.0803$.
According to the total possibility degrees of the alternatives, the ranking order of the alternatives for website $t_1$ is $A_4 \succ A_5 \succ A_3 \succ A_1 \succ A_2$.
Similarly, repeating Steps 1 to 10, the ranking orders of the alternatives for websites $t_2$ and $t_3$ are obtained as $A_4 \succ A_1 \succ A_3 \succ A_5 \succ A_2$ and $A_5 \succ A_4 \succ A_3 \succ A_1 \succ A_2$, respectively.
Step 11. Determine the final ranking orders of alternatives and select the best one.
According to the 0–1 programming model in Equation (44), the collective ranking matrix $Y = (y_{ir})_{m \times m}$ is obtained as
$$Y = \begin{pmatrix} 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \end{pmatrix}.$$
Hence, the final ranking order is $A_4 \succ A_5 \succ A_3 \succ A_1 \succ A_2$ and the best alternative is BYD Han.

6. Sensitivity Analysis and Comparative Analysis

To demonstrate the advantages of the proposed method, this section conducts the sensitivity analysis of the balancing coefficient ε in the single-objective model (i.e., Equation (25)) and comparative analyses with existing methods.

6.1. Sensitivity Analysis of the Balancing Coefficient ε

In Section 5.2, we take $\varepsilon = 0.5$. This section performs a sensitivity analysis of $\varepsilon$. By taking values of $\varepsilon$ from 0 to 0.9, the results are obtained and shown in Table 11.
As shown in Table 11, although the attribute weights change slightly as $\varepsilon$ varies, the ranking results of the alternatives remain unchanged throughout. This indicates the robustness of the proposed method.

6.2. Comparative Analysis

This section conducts comparative analyses from two aspects: one is to compare with existing NEV selection methods based on online reviews, and the other is to compare with MADM methods in the PLTS environment.

6.2.1. Comparison with Existing Car Selection Methods Based on Online Reviews

Comparing the proposed method with existing car selection methods [6,7,21], the comparative results are listed in Table 12.
According to Table 12, the proposed method has the following merits:
(1)
The proposed method crawls online reviews from multiple websites, while the existing NEV selection methods [6,7,21] obtained online reviews from only one website. As different consumers prefer different platforms and the online reviews on a single platform are limited, it is advisable to derive online reviews from several websites. In this respect, the evaluation information collected by the proposed method is more sufficient, which is helpful for reaching reasonable decision results.
(2)
The evaluation information extracted from online reviews by the proposed method is more precise and reliable because it is represented as PLTSs with five-granularity linguistic terms. The method in [7] applied q-rung orthopair fuzzy sets (q-ROFSs) to express evaluation information. However, q-ROFSs only express the proportions of positive and negative sentiments and fail to distinguish comments which are neutral or do not provide any evaluation. Although the method in [6] can handle comments without any evaluations by hesitant intuitionistic fuzzy sets (HIFSs), it is unable to express the strength of positive and negative sentiments. Converting sentiment scores into memberships of alternatives, Ref. [21] transformed online reviews into hesitant probabilistic fuzzy sets (HPFSs). Compared with the methods in [6,7], the method in [21] represented online reviews more smoothly. Nevertheless, the decision information may be distorted when online reviews are transformed into sentiment scores. The proposed method describes evaluation information by five-granularity PLTSs, including very positive, positive, neutral, negative and very negative linguistic terms, which not only retain the natural language forms of the sentiment orientations in online reviews, but also reflect the strengths of different sentiments together with their proportions. Hence, the evaluation information derived by the proposed method is more precise and reliable.

6.2.2. Comparison with MADM Methods in the PLTS Environment

To further show the merits of the proposed method, it is compared with existing MADM methods in the PLTS environment [37,38,39]. By applying these methods, the problems in MADM are solved step by step, and these problems are listed in Table 13.
Compared with methods [37,38,39], it can be observed from Table 13 that the proposed method has following advantages:
(1)
The decision information in the proposed method is more reliable because it is extracted from users' online reviews on products. However, the decision information in the methods of [37,38] is provided by several DMs. Due to the limited knowledge and experience of DMs, the provided decision information may be limited and unable to represent the general evaluations of most users.
(2)
The attribute weights in the proposed method are obtained by minimizing the uncertainty degrees of attributes as well as maximizing the deviations between alternatives with respect to the attributes. The method in [37] assigned attribute weights subjectively, which may greatly impact the alternative ranking orders. For example, when the attribute weights are set to the ones obtained by the proposed method, the alternatives are sorted as $A_4 \succ A_5 \succ A_1 \succ A_3 \succ A_2$. However, when the attribute weights are given as $w = (0.1964, 0.0845, 0.1000, 0.1805, 0.1482, 0.1849, 0.1055)$, the ranking order is $A_5 \succ A_4 \succ A_1 \succ A_3 \succ A_2$ and the best alternative changes from $A_4$ to $A_5$. Although the method in [38] objectively determined attribute weights by maximizing deviation and the method in [39] fused AHP and deviation, they both neglected the uncertainty degrees of the attributes. As the decision information is based on online reviews, in which there is much uncertainty, it is more reasonable to consider uncertainty besides the deviations between alternatives when deriving attribute weights.
(3)
The proposed method is feasible and has a stronger distinguishing power when ranking alternatives. It can be seen from Table 13 that the alternative ranking order obtained by the proposed method is similar to those derived by the methods in [37,38,39] and the best alternative is $A_4$. Furthermore, observing Table 14, the Pearson correlation coefficients of the alternative ranking orders are all greater than 0.9, which verifies the similarity of the alternative ranking orders between the proposed method and the existing ones. This illustrates the feasibility of the proposed method. Moreover, the distinguishing power of the proposed method is stronger, as illustrated in Figure 5.

7. Conclusions

This paper addresses the NEV selection problem with online reviews and proposes a novel MADM method based on sentiment analysis and a possible degree-based D–S evidence theory in the PLTS environment. First, online customer reviews on NEVs are crawled from multiple websites and transformed into five-granularity PLTSs with a sentiment analysis technique. Unlike existing techniques, it considers degree adverbs and negation words in addition to sentiment words, so it retains the initial linguistic evaluations rather than compressing them into sentiment scores. Afterwards, by maximizing the deviations between alternatives and minimizing the information uncertainty, a bi-objective programming model is constructed to determine attribute weights objectively. Then, a possible degree-based D–S evidence theory method in the PLTS context is proposed to rank the alternatives for each website, and a 0–1 programming model is built to fuse the ranking orders of the individual websites into a final one. Comparative analyses demonstrate the merits of the proposed method over existing ones.
Although the proposed method resolves the NEV selection problem and can be extended to other fields, such as hotel selection and education evaluation, there is still room for improvement. For example, the proposed method only considers the decision information contained in online reviews and does not involve experts in the decision making. Although online reviews provide abundant information, such information may not be sufficiently professional. Therefore, in future work, we will fuse online reviews with large-scale expert information [40] and develop a new methodology that considers the consensus between these two types of decision information.

Author Contributions

Methodology, Y.Z. and G.X.; validation, Y.Z.; writing—review and editing, Y.Z. and G.X.; visualization, Y.Z.; supervision, G.X. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the National Natural Science Foundation of China (No. 72261007).

Data Availability Statement

The data used in this article were collected from autohome.com, xcar.com and pcauto.com. The authors confirm that all relevant data are available on these three websites.

Conflicts of Interest

The authors declare that they have no conflicts of interest that could influence the work reported in this paper.

Appendix A

Table A1. Examples of crawled online reviews of five NEVs.
Alternatives | Online Reviews
AION S ( A 1 )In terms of space, it is very spacious for household use. As a Class A car, it doesn’t look large visually, but the overall space utilization is excellent. Even with five people seated, it doesn’t feel crowded.
The cost-performance ratio of AION S is very high. I only spend 20 to 30 yuan each week on charging outside. Such travel costs are really attractive for an average family.
Tesla Model Y ( A 2 )The car has a large mouse-like body, providing a very spacious interior. It is evident that this car has been meticulously designed. The interior space is surprisingly large, especially in the front row, where you can stretch your legs straight out. However, the armrest box is too small, which is quite inconvenient.
With increased features at no additional cost, the cost-performance ratio is very high. It excels in all aspects and is also very good compared to others in the same class.
Great Wall Euler Good Cat ( A 3 )The interior space is very large. Compared to other models in the family, it is not inferior to other cars with larger exteriors. However, correspondingly, the large interior space has compressed the trunk space, making the trunk not very spacious.
However, after looking at three or four models, Euler Good Cat is the best choice within my acceptable range, in every aspect. Not only are the promotional activities substantial, but the service is also excellent.
BYD Han ( A 4 )Overall, I am quite satisfied with the space. It meets the needs for daily commuting and holiday trips. Especially the front row space and the legroom in the back row are quite spacious.
At this price, buying a BYD Han is a no-brainer, highly recommended. The driving comfort is truly on par with Mercedes. Now, 4S shops have test drive cars available, so you can experience it yourself. The power response is immediate, the soundproof glass is indeed quiet, and the rear space is ample. There are no discounts on the price, but this car is worth the price.
BYD Qin PLUS ( A 5 )The space is very large, enough for a family of four. The headroom in the back row is also very good, and even with a height of 1.78 m, it doesn’t feel cramped. The seats are very comfortable, with good softness and excellent support.
The overall cost-performance ratio is very high. After all, the price is quite reasonable, and the configurations in all aspects are very good compared to cars in the same class.
Table A2. Sentiment distribution of alternatives.
Alternatives | G_α^{ij(1)} | C1 | C2 | C3 | C4 | C5 | C6 | C7
A1 | G_{-2} | 2 | 0 | 2 | 1 | 0 | 3 | 0
A1 | G_{-1} | 3 | 4 | 1 | 4 | 4 | 6 | 5
A1 | G_{0} | 13 | 10 | 7 | 5 | 7 | 14 | 22
A1 | G_{1} | 19 | 14 | 15 | 17 | 10 | 16 | 23
A1 | G_{2} | 48 | 57 | 38 | 58 | 64 | 46 | 35
A1 | Ø | 0 | 0 | 22 | 0 | 0 | 0 | 0
A2 | G_{-2} | 3 | 3 | 1 | 2 | 3 | 3 | 2
A2 | G_{-1} | 9 | 3 | 7 | 7 | 8 | 10 | 11
A2 | G_{0} | 34 | 21 | 27 | 15 | 24 | 29 | 27
A2 | G_{1} | 24 | 24 | 26 | 25 | 25 | 35 | 42
A2 | G_{2} | 63 | 81 | 69 | 83 | 73 | 55 | 51
A2 | Ø | 1 | 2 | 4 | 2 | 1 | 2 | 1
A3 | G_{-2} | 1 | 2 | 1 | 0 | 1 | 2 | 1
A3 | G_{-1} | 6 | 1 | 2 | 2 | 0 | 2 | 8
A3 | G_{0} | 18 | 6 | 12 | 9 | 7 | 13 | 16
A3 | G_{1} | 14 | 15 | 19 | 13 | 8 | 26 | 15
A3 | G_{2} | 49 | 64 | 53 | 63 | 71 | 44 | 47
A3 | Ø | 0 | 0 | 1 | 1 | 1 | 1 | 1
A4 | G_{-2} | 2 | 1 | 1 | 0 | 1 | 0 | 2
A4 | G_{-1} | 4 | 2 | 2 | 2 | 3 | 2 | 12
A4 | G_{0} | 9 | 7 | 10 | 9 | 6 | 9 | 25
A4 | G_{1} | 16 | 10 | 15 | 12 | 9 | 24 | 24
A4 | G_{2} | 73 | 84 | 73 | 82 | 86 | 67 | 41
A4 | Ø | 1 | 1 | 4 | 0 | 0 | 3 | 1
A5 | G_{-2} | 1 | 0 | 2 | 2 | 1 | 3 | 7
A5 | G_{-1} | 1 | 5 | 5 | 1 | 0 | 2 | 8
A5 | G_{0} | 13 | 3 | 7 | 8 | 9 | 9 | 19
A5 | G_{1} | 23 | 17 | 21 | 22 | 9 | 14 | 23
A5 | G_{2} | 67 | 79 | 69 | 72 | 86 | 77 | 48
A5 | Ø | 0 | 1 | 1 | 0 | 0 | 0 | 0
Table A3. Sentimental proportion of alternatives.
Alternative | Sentiment p̃_α^{ij(1)} | C1 | C2 | C3 | C4 | C5 | C6 | C7
A1 | p̃_{-2} | 0.0235 | 0.0000 | 0.0235 | 0.0118 | 0.0000 | 0.0353 | 0.0000
A1 | p̃_{-1} | 0.0353 | 0.0471 | 0.0118 | 0.0471 | 0.0471 | 0.0706 | 0.0588
A1 | p̃_{0} | 0.1529 | 0.1176 | 0.0824 | 0.0588 | 0.0824 | 0.1647 | 0.2588
A1 | p̃_{1} | 0.2235 | 0.1647 | 0.1765 | 0.2000 | 0.1176 | 0.1882 | 0.2706
A1 | p̃_{2} | 0.5647 | 0.6706 | 0.4471 | 0.6824 | 0.7529 | 0.5412 | 0.4118
A1 | p̃_{Ø} | 0.0000 | 0.0000 | 0.2588 | 0.0000 | 0.0000 | 0.0000 | 0.0000
A2 | p̃_{-2} | 0.0224 | 0.0224 | 0.0075 | 0.0149 | 0.0224 | 0.0224 | 0.0149
A2 | p̃_{-1} | 0.0672 | 0.0224 | 0.0522 | 0.0522 | 0.0597 | 0.0746 | 0.0821
A2 | p̃_{0} | 0.2537 | 0.1567 | 0.2015 | 0.1119 | 0.1791 | 0.2164 | 0.2015
A2 | p̃_{1} | 0.1791 | 0.1791 | 0.1940 | 0.1866 | 0.1866 | 0.2612 | 0.3134
A2 | p̃_{2} | 0.4701 | 0.6045 | 0.5149 | 0.6194 | 0.5448 | 0.4104 | 0.3806
A2 | p̃_{Ø} | 0.0075 | 0.0149 | 0.0299 | 0.0149 | 0.0075 | 0.0149 | 0.0075
A3 | p̃_{-2} | 0.0114 | 0.0227 | 0.0114 | 0.0000 | 0.0114 | 0.0227 | 0.0114
A3 | p̃_{-1} | 0.0682 | 0.0114 | 0.0227 | 0.0227 | 0.0000 | 0.0227 | 0.0909
A3 | p̃_{0} | 0.2045 | 0.0682 | 0.1364 | 0.1023 | 0.0795 | 0.1477 | 0.1818
A3 | p̃_{1} | 0.1591 | 0.1705 | 0.2159 | 0.1477 | 0.0909 | 0.2955 | 0.1705
A3 | p̃_{2} | 0.5568 | 0.7273 | 0.6023 | 0.7159 | 0.8068 | 0.5000 | 0.5341
A3 | p̃_{Ø} | 0.0000 | 0.0000 | 0.0114 | 0.0114 | 0.0114 | 0.0114 | 0.0114
A4 | p̃_{-2} | 0.0190 | 0.0095 | 0.0095 | 0.0000 | 0.0095 | 0.0000 | 0.0190
A4 | p̃_{-1} | 0.0381 | 0.0190 | 0.0190 | 0.0190 | 0.0286 | 0.0190 | 0.1143
A4 | p̃_{0} | 0.0857 | 0.0667 | 0.0952 | 0.0857 | 0.0571 | 0.0857 | 0.2381
A4 | p̃_{1} | 0.1524 | 0.0952 | 0.1429 | 0.1143 | 0.0857 | 0.2286 | 0.2286
A4 | p̃_{2} | 0.6952 | 0.8000 | 0.6952 | 0.7810 | 0.8190 | 0.6381 | 0.3905
A4 | p̃_{Ø} | 0.0095 | 0.0095 | 0.0381 | 0.0000 | 0.0000 | 0.0286 | 0.0095
A5 | p̃_{-2} | 0.0095 | 0.0000 | 0.0190 | 0.0190 | 0.0095 | 0.0286 | 0.0667
A5 | p̃_{-1} | 0.0095 | 0.0476 | 0.0476 | 0.0095 | 0.0000 | 0.0190 | 0.0762
A5 | p̃_{0} | 0.1238 | 0.0286 | 0.0667 | 0.0762 | 0.0857 | 0.0857 | 0.1810
A5 | p̃_{1} | 0.2190 | 0.1619 | 0.2000 | 0.2095 | 0.0857 | 0.1333 | 0.2190
A5 | p̃_{2} | 0.6381 | 0.7524 | 0.6571 | 0.6857 | 0.8190 | 0.7333 | 0.4571
A5 | p̃_{Ø} | 0.0000 | 0.0095 | 0.0095 | 0.0000 | 0.0000 | 0.0000 | 0.0000
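The proportions in Table A3 are the counts of Table A2 divided by the total number of crawled reviews of the corresponding alternative, including reviews with a missing evaluation. A minimal sketch of this conversion for the A1–C1 cell:

```python
# Counts for alternative A1, attribute C1 on website t1 (first column of Table A2).
counts = {"s_-2": 2, "s_-1": 3, "s_0": 13, "s_1": 19, "s_2": 48, "missing": 0}
total = sum(counts.values())                       # 85 reviews in total
proportions = {k: round(v / total, 4) for k, v in counts.items()}
print(proportions)
# {'s_-2': 0.0235, 's_-1': 0.0353, 's_0': 0.1529, 's_1': 0.2235, 's_2': 0.5647, 'missing': 0.0}
```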
Matrix A1. The standardized probabilistic linguistic decision matrix D^(1) for website t1; its entries are listed row by row, with one row per alternative A1–A5 and one column per attribute C1–C7.
D ( 1 ) = { s 2 ( 0.0235 ) , s 1 ( 0.0353 ) , s 0 ( 0.1529 ) , s 1 ( 0.2235 ) , s 2 ( 0.5647 ) } { s 1 ( 0.0471 ) , s 0 ( 0.1176 ) , s 1 ( 0.1647 ) , s 2 ( 0.6706 ) } { s 2 ( 0.0753 ) , s 1 ( 0.0635 ) , s 0 ( 0.1341 ) , s 1 ( 0.2282 ) , s 2 ( 0.4988 ) } { s 2 ( 0.0118 ) , s 1 ( 0.0471 ) , s 0 ( 0.0588 ) , s 1 ( 0.2000 ) , s 2 ( 0.6824 ) } { s 1 ( 0.0471 ) , s 0 ( 0.0824 ) , s 1 ( 0.1176 ) , s 2 ( 0.7529 ) } { s 2 ( 0.0353 ) , s 1 ( 0.0706 ) , s 0 ( 0.1647 ) , s 1 ( 0.1882 ) , s 2 ( 0.5412 ) } { s 1 ( 0.0588 ) , s 0 ( 0.2588 ) , s 1 ( 0.2706 ) , s 2 ( 0.4118 ) } { s 2 ( 0.0239 ) , s 1 ( 0.0687 ) , s 0 ( 0.2552 ) , s 1 ( 0.1806 ) , s 2 ( 0.4716 ) } { s 2 ( 0.0254 ) , s 1 ( 0.0254 ) , s 0 ( 0.1597 ) , s 1 ( 0.1821 ) , s 2 ( 0.6075 ) } { s 2 ( 0.0134 ) , s 1 ( 0.0582 ) , s 0 ( 0.2075 ) , s 1 ( 0.2000 ) , s 2 ( 0.5209 ) } { s 2 ( 0.0179 ) , s 1 ( 0.0552 ) , s 0 ( 0.1149 ) , s 1 ( 0.1896 ) , s 2 ( 0.6224 ) } { s 2 ( 0.0239 ) , s 1 ( 0.0612 ) , s 0 ( 0.1806 ) , s 1 ( 0.1881 ) , s 2 ( 0.5463 ) } { s 2 ( 0.0254 ) , s 1 ( 0.0776 ) , s 0 ( 0.2194 ) , s 1 ( 0.2642 ) , s 2 ( 0.4134 ) } { s 2 ( 0.0164 ) , s 1 ( 0.0836 ) , s 0 ( 0.2030 ) , s 1 ( 0.3149 ) , s 2 ( 0.3821 ) } { s 2 ( 0.0114 ) , s 1 ( 0.0682 ) , s 0 ( 0.2045 ) , s 1 ( 0.1591 ) , s 2 ( 0.5568 ) } { s 2 ( 0.0227 ) , s 1 ( 0.0114 ) , s 0 ( 0.0682 ) , s 1 ( 0.1705 ) , s 2 ( 0.7273 ) } { s 2 ( 0.0136 ) , s 1 ( 0.0250 ) , s 0 ( 0.1386 ) , s 1 ( 0.2182 ) , s 2 ( 0.6045 ) } { s 2 ( 0.0023 ) , s 1 ( 0.0250 ) , s 0 ( 0.1045 ) , s 1 ( 0.1500 ) , s 2 ( 0.7182 ) } { s 2 ( 0.0136 ) , s 1 ( 0.0023 ) , s 0 ( 0.0818 ) , s 1 ( 0.0932 ) , s 2 ( 0.8091 ) } { s 2 ( 0.0250 ) , s 1 ( 0.0250 ) , s 0 ( 0.1500 ) , s 1 ( 0.2977 ) , s 2 ( 0.5023 ) } { s 2 ( 0.0136 ) , s 1 ( 0.0932 ) , s 0 ( 0.1841 ) , s 1 ( 0.1727 ) , s 2 ( 0.5364 ) } { s 2 ( 0.0210 ) , s 1 ( 0.0400 ) , s 0 ( 0.0876 ) , s 1 ( 0.1543 ) , s 2 ( 0.6971 ) } { s 2 ( 0.0114 ) , s 1 ( 0.0210 ) , s 0 ( 0.0686 ) , s 1 ( 0.0971 ) , s 2 ( 0.8019 ) } { s 2 ( 0.0171 ) , s 1 ( 0.0267 ) , s 0 ( 0.1029 ) , s 1 ( 0.1505 ) , s 2 ( 0.7029 ) } { s 1 ( 0.0190 ) , s 0 ( 0.0857 ) , s 1 ( 0.1143 ) , s 2 ( 0.7810 ) } { s 2 ( 0.0095 ) , s 1 ( 0.0286 ) , s 0 ( 0.0571 ) , s 1 ( 0.0857 ) , s 2 ( 0.8190 ) } { s 2 ( 0.0057 ) , s 1 ( 0.0248 ) , s 0 ( 0.0914 ) , s 1 ( 0.2343 ) , s 2 ( 0.6438 ) } { s 2 ( 0.0210 ) , s 1 ( 0.1162 ) , s 0 ( 0.2400 ) , s 1 ( 0.2305 ) , s 2 ( 0.3924 ) } { s 2 ( 0.0095 ) , s 1 ( 0.0095 ) , s 0 ( 0.1238 ) , s 1 ( 0.2190 ) , s 2 ( 0.6381 ) } { s 2 ( 0.0019 ) , s 1 ( 0.0495 ) , s 0 ( 0.0305 ) , s 1 ( 0.1638 ) , s 2 ( 0.7543 ) } { s 2 ( 0.0210 ) , s 1 ( 0.0495 ) , s 0 ( 0.0686 ) , s 1 ( 0.2019 ) , s 2 ( 0.6590 ) } { s 2 ( 0.0190 ) , s 1 ( 0.0095 ) , s 0 ( 0.0762 ) , s 1 ( 0.2095 ) , s 2 ( 0.6857 ) } { s 2 ( 0.0095 ) , s 0 ( 0.0857 ) , s 1 ( 0.0857 ) , s 2 ( 0.8190 ) } { s 2 ( 0.0286 ) , s 1 ( 0.0190 ) , s 0 ( 0.0857 ) , s 1 ( 0.1333 ) , s 2 ( 0.7333 ) } { s 2 ( 0.0667 ) , s 1 ( 0.0762 ) , s 0 ( 0.1810 ) , s 1 ( 0.2190 ) , s 2 ( 0.4571 ) }
Matrix A2. The weighted matrix D_w^(1) obtained from D^(1) after applying the attribute weights; it follows the same layout as Matrix A1 (one row per alternative A1–A5, one column per attribute C1–C7).
D w ( 1 ) = { s 2 ( 0.0235 ) , s 1.83 ( 0.0353 ) , s 1.60 ( 0.1529 ) , s 1.25 ( 0.2235 ) , s 2 ( 0.5647 ) } { s 1.87 ( 0.0471 ) , s 1.69 ( 0.1176 ) , s 1.40 ( 0.1647 ) , s 2 ( 0.6706 ) } { s 2 ( 0.0753 ) , s 1.83 ( 0.0635 ) , s 1.59 ( 0.1341 ) , s 1.23 ( 0.2282 ) , s 2 ( 0.4988 ) } { s 2 ( 0.0118 ) , s 1.88 ( 0.0471 ) , s 1.71 ( 0.0588 ) , s 1.44 ( 0.2000 ) , s 2 ( 0.6824 ) } { s 1.80 ( 0.0471 ) , s 1.55 ( 0.0824 ) , s 1.14 ( 0.1176 ) , s 2 ( 0.7529 ) } { s 2 ( 0.0353 ) , s 1.79 ( 0.0706 ) , s 1.51 ( 0.1647 ) , s 1.08 ( 0.1882 ) , s 2 ( 0.5412 ) } { s 1.88 ( 0.0588 ) , s 1.72 ( 0.2588 ) , s 1.45 ( 0.2706 ) , s 2 ( 0.4118 ) } { s 2 ( 0.0239 ) , s 1.83 ( 0.0687 ) , s 1.60 ( 0.2552 ) , s 1.25 ( 0.1806 ) , s 2 ( 0.4716 ) } { s 2 ( 0.0254 ) , s 1.87 ( 0.0254 ) , s 1.69 ( 0.1597 ) , s 1.40 ( 0.1821 ) , s 2 ( 0.6075 ) } { s 2 ( 0.0134 ) , s 1.83 ( 0.0582 ) , s 1.59 ( 0.2075 ) , s 1.23 ( 0.2000 ) , s 2 ( 0.5209 ) } { s 2 ( 0.0179 ) , s 1.88 ( 0.0552 ) , s 1.71 ( 0.1149 ) , s 1.44 ( 0.1896 ) , s 2 ( 0.6224 ) } { s 2 ( 0.0239 ) , s 1.80 ( 0.0612 ) , s 1.55 ( 0.1806 ) , s 1.14 ( 0.1881 ) , s 2 ( 0.5463 ) } { s 2 ( 0.0254 ) , s 1.79 ( 0.0776 ) , s 1.51 ( 0.2194 ) , s 1.08 ( 0.2642 ) , s 2 ( 0.4134 ) } { s 2 ( 0.0164 ) , s 1.88 ( 0.0836 ) , s 1.72 ( 0.2030 ) , s 1.45 ( 0.3149 ) , s 2 ( 0.3821 ) } { s 2 ( 0.0114 ) , s 1.83 ( 0.0682 ) , s 1.60 ( 0.2045 ) , s 1.25 ( 0.1591 ) , s 2 ( 0.5568 ) } { s 2 ( 0.0227 ) , s 1.87 ( 0.0114 ) , s 1.69 ( 0.0682 ) , s 1.40 ( 0.1705 ) , s 2 ( 0.7273 ) } { s 2 ( 0.0136 ) , s 1.83 ( 0.0250 ) , s 1.59 ( 0.1386 ) , s 1.23 ( 0.2182 ) , s 2 ( 0.6045 ) } { s 2 ( 0.0023 ) , s 1.88 ( 0.0250 ) , s 1.71 ( 0.1045 ) , s 1.44 ( 0.1500 ) , s 2 ( 0.7182 ) } { s 2 ( 0.0136 ) , s 1.80 ( 0.0023 ) , s 1.55 ( 0.0818 ) , s 1.14 ( 0.0932 ) , s 2 ( 0.8091 ) } { s 2 ( 0.0250 ) , s 1.79 ( 0.0250 ) , s 1.51 ( 0.1500 ) , s 1.08 ( 0.2977 ) , s 2 ( 0.5023 ) } { s 2 ( 0.0136 ) , s 1.88 ( 0.0932 ) , s 1.72 ( 0.1841 ) , s 1.45 ( 0.1727 ) , s 2 ( 0.5364 ) } { s 2 ( 0.0210 ) , s 1.83 ( 0.0400 ) , s 1.60 ( 0.0876 ) , s 1.25 ( 0.1543 ) , s 2 ( 0.6971 ) } { s 2 ( 0.0114 ) , s 1.87 ( 0.0210 ) , s 1.69 ( 0.0686 ) , s 1.40 ( 0.0971 ) , s 2 ( 0.8019 ) } { s 2 ( 0.0171 ) , s 1.83 ( 0.0267 ) , s 1.59 ( 0.1029 ) , s 1.23 ( 0.1505 ) , s 2 ( 0.7029 ) } { s 1.88 ( 0.0190 ) , s 1.71 ( 0.0857 ) , s 1.44 ( 0.1143 ) , s 2 ( 0.7810 ) } { s 2 ( 0.0095 ) , s 1.80 ( 0.0286 ) , s 1.55 ( 0.0571 ) , s 1.14 ( 0.0857 ) , s 2 ( 0.8190 ) } { s 2 ( 0.0057 ) , s 1.79 ( 0.0248 ) , s 1.51 ( 0.0914 ) , s 1.08 ( 0.2343 ) , s 2 ( 0.6438 ) } { s 2 ( 0.0210 ) , s 1.88 ( 0.1162 ) , s 1.72 ( 0.2400 ) , s 1.45 ( 0.2305 ) , s 2 ( 0.3924 ) } { s 2 ( 0.0095 ) , s 1.83 ( 0.0095 ) , s 1.60 ( 0.1238 ) , s 1.25 ( 0.2190 ) , s 2 ( 0.6381 ) } { s 2 ( 0.0019 ) , s 1.87 ( 0.0495 ) , s 1.69 ( 0.0305 ) , s 1.40 ( 0.1638 ) , s 2 ( 0.7543 ) } { s 2 ( 0.0210 ) , s 1.83 ( 0.0495 ) , s 1.59 ( 0.0686 ) , s 1.23 ( 0.2019 ) , s 2 ( 0.6590 ) } { s 2 ( 0.0190 ) , s 1.88 ( 0.0095 ) , s 1.71 ( 0.0762 ) , s 1.44 ( 0.2095 ) , s 2 ( 0.6857 ) } { s 2 ( 0.0095 ) , s 1.55 ( 0.0857 ) , s 1.14 ( 0.0857 ) , s 2 ( 0.8190 ) } { s 2 ( 0.0286 ) , s 1.79 ( 0.0190 ) , s 1.51 ( 0.0857 ) , s 1.08 ( 0.1333 ) , s 2 ( 0.7333 ) } { s 2 ( 0.0667 ) , s 1.88 ( 0.0762 ) , s 1.72 ( 0.1810 ) , s 1.45 ( 0.2190 ) , s 2 ( 0.4571 ) }

References

  1. Dong, J.; Wan, S. Type-2 interval-valued intuitionistic fuzzy matrix game and application to energy vehicle industry development. Expert Syst. Appl. 2024, 249, 123398. [Google Scholar] [CrossRef]
  2. Dwivedi, P.P.; Sharma, D.K. Evaluation and ranking of battery electric vehicles by Shannon’s entropy and TOPSIS methods. Math. Comput. Simul. 2023, 212, 457–474. [Google Scholar] [CrossRef]
  3. Meng, W.; Ma, M.; Li, Y.; Huang, B. New energy vehicle R&D strategy with supplier capital constraints under China’s dual credit policy. Energy Policy 2022, 168, 113099. [Google Scholar]
  4. Yu, S.; Zhang, X.; Du, Z.; Chen, Y. A new multi-attribute decision making method for overvalued star ratings adjustment and its application in new energy vehicle selection. Mathematics 2023, 11, 2037. [Google Scholar] [CrossRef]
  5. Li, W.; Li, Y. Automotive product ranking method considering individual standard differences in online reviews. Syst. Eng. 2021, 39, 143–152. [Google Scholar]
  6. Tian, Z.; Liang, H.; Nie, R.; Wang, X.; Wang, J. Data-driven multi-criteria decision support method for electric vehicle selection. Comput. Ind. Eng. 2023, 177, 109061. [Google Scholar] [CrossRef]
  7. Yang, Z.; Li, Q.; Charles, V.; Xu, B.; Gupta, S. Supporting personalized new energy vehicle purchase decision-making: Customer reviews and product recommendation platform. Int. J. Prod. Econ. 2023, 265, 109003. [Google Scholar] [CrossRef]
  8. Xie, W.; Xu, Z.; Ren, Z.; Wang, H. Probabilistic linguistic analytic hierarchy process and its application on the performance assessment of Xiongan new area. Int. J. Inf. Technol. Decis. Mak. 2018, 17, 1693–1724. [Google Scholar] [CrossRef]
  9. Wan, S.; Gao, S.; Dong, J. Trapezoidal cloud based heterogeneous multi-criterion group decision-making for container multimodal transport path selection. Appl. Soft Comput. 2024, 154, 111374. [Google Scholar] [CrossRef]
  10. Wang, X.; Wang, J.; Zhang, H. Distance-based multicriteria group decision-making approach with probabilistic linguistic term sets. Expert Syst. 2019, 36, e12352. [Google Scholar] [CrossRef]
  11. Yan, J.; Dong, J.; Wan, S.; Gao, Y. A quantum probability theory-based method for heterogeneous multi-criteria group decision making with incomplete probabilistic linguistic preference relations considering the interference effect among decision makers. J. Oper. Res. Soc. 2024, 1–27. [Google Scholar] [CrossRef]
  12. Liu, P.; Zhang, X. A new hesitant fuzzy linguistic approach for multiple attribute decision making based on Dempster–Shafer evidence theory. Appl. Soft Comput. 2020, 86, 105897. [Google Scholar] [CrossRef]
  13. Xiao, F. EFMCDM: Evidential fuzzy multicriteria decision making based on belief entropy. IEEE Trans. Fuzzy Syst. 2019, 28, 1477–1491. [Google Scholar] [CrossRef]
  14. Xiao, F.; Wen, J.; Pedrycz, W. Generalized divergence-based decision making method with an application to pattern classification. IEEE Trans. Knowl. Data Eng. 2022, 35, 6941–6956. [Google Scholar] [CrossRef]
  15. Liu, Z.; Bi, Y.; Liu, P. A conflict elimination-based model for failure mode and effect analysis: A case application in medical waste management system. Comput. Ind. Eng. 2023, 178, 109145. [Google Scholar] [CrossRef]
  16. Nie, R.; Tian, Z.; Wang, J.; Chin, K. Hotel selection driven by online textual reviews: Applying a semantic partitioned sentiment dictionary and evidence theory. Int. J. Hosp. Manag. 2020, 88, 102495. [Google Scholar] [CrossRef]
  17. Wang, J.; Xu, L.; Cai, J.; Fu, Y.; Bian, X. Offshore wind turbine selection with a novel multi-criteria decision-making method based on Dempster-Shafer evidence theory. Sustain. Energy Technol. Assess. 2022, 51, 101951. [Google Scholar] [CrossRef]
  18. Ziemba, P. Multi-criteria approach to stochastic and fuzzy uncertainty in the selection of electric vehicles with high social acceptance. Expert Syst. Appl. 2021, 173, 114686. [Google Scholar] [CrossRef]
  19. Huang, T.; Tang, X.; Zhao, S.; Zhang, Q.; Pedrycz, W. Linguistic information-based granular computing based on a tournament selection operator-guided PSO for supporting multi-attribute group decision-making with distributed linguistic preference relations. Inf. Sci. 2022, 610, 488–507. [Google Scholar] [CrossRef]
  20. Tversky, A.; Kahneman, D. Advances in prospect theory: Cumulative representation of uncertainty. J. Risk Uncertain. 1992, 5, 297–323. [Google Scholar] [CrossRef]
  21. Liu, D.; Xu, J.; Du, Y. An integrated HPF-TODIM-MULTIMOORA approach for car selection through online reviews. Ann. Oper. Res. 2024. [Google Scholar] [CrossRef]
  22. Chen, J.; Li, X. Doctors ranking through heterogeneous information: The new score functions considering patients’ emotional intensity. Expert Syst. Appl. 2023, 219, 119620. [Google Scholar] [CrossRef]
  23. Darko, A.; Liang, D.; Xu, Z.; Agbodah, K.; Obiora, S. A novel multi-attribute decision-making for ranking mobile payment services using online consumer reviews. Expert Syst. Appl. 2023, 213, 119262. [Google Scholar] [CrossRef] [PubMed]
  24. Yang, Z.; Li, Q.; Islam, N.; Han, C.; Gupta, S. Product Attribute and Heterogeneous Sentiment Analysis-Based Evaluation to Support Online Personalized Consumption Decisions. IEEE Trans. Eng. Manag. 2024, 71, 11198–11211. [Google Scholar] [CrossRef]
  25. Zhao, M.; Li, L.; Xu, Z. Study on hotel selection method based on integrating online ratings and reviews from multi-websites. Inf. Sci. 2021, 572, 460–481. [Google Scholar] [CrossRef]
  26. Wan, S.; Wu, H.; Dong, J. An integrated method for complex heterogeneous multi-attribute group decision-making and application to photovoltaic power station site selection. Expert Syst. Appl. 2024, 242, 122456. [Google Scholar] [CrossRef]
  27. Pang, Q.; Wang, H.; Xu, Z. Probabilistic linguistic term sets in multi-attribute group decision making. Inf. Sci. 2016, 369, 128–143. [Google Scholar] [CrossRef]
  28. Gou, X.; Xu, Z. Novel basic operational laws for linguistic terms, hesitant fuzzy linguistic term sets and probabilistic linguistic term sets. Inf. Sci. 2016, 372, 407–427. [Google Scholar] [CrossRef]
  29. Liao, H.; Jiang, L.; Lev, B.; Fujita, H. Novel operations of PLTSs based on the disparity degrees of linguistic terms and their use in designing the probabilistic linguistic ELECTRE III method. Appl. Soft Comput. 2019, 80, 450–464. [Google Scholar] [CrossRef]
  30. Wu, X.; Liao, H. A consensus-based probabilistic linguistic gained and lost dominance score method. Eur. J. Oper. Res. 2019, 272, 1017–1027. [Google Scholar] [CrossRef]
  31. Dempster, A.P. Upper and lower probabilities induced by a multivalued mapping. Ann. Math. Stat. 1967, 38, 325–339. [Google Scholar] [CrossRef]
  32. Shafer, G. A Mathematical Theory of Evidence. Technometrics 1978, 20, 3–86. [Google Scholar]
  33. Fei, L.; Feng, Y.; Wang, H. Modeling heterogeneous multi-attribute emergency decision-making with Dempster-Shafer theory. Comput. Ind. Eng. 2021, 161, 107633. [Google Scholar] [CrossRef]
  34. Jiroušek, R.; Shenoy, P.P. A new definition of entropy of belief functions in the Dempster–Shafer theory. Int. J. Approx. Reason. 2018, 92, 49–65. [Google Scholar] [CrossRef]
  35. Lan, J.; Zou, H.; Hu, M. Dominance degrees for intervals and their application in multiple attribute decision-making. Fuzzy Sets Syst. 2020, 383, 146–164. [Google Scholar] [CrossRef]
  36. Zhang, M.; Li, G. Combining TOPSIS and GRA for supplier selection problem with interval numbers. J. Cent. South Univ. 2018, 25, 1116–1128. [Google Scholar] [CrossRef]
  37. Du, Y.; Liu, D. An integrated method for multi-granular probabilistic linguistic multiple attribute decision-making with prospect theory. Comput. Ind. Eng. 2021, 159, 107500. [Google Scholar] [CrossRef]
  38. Li, P.; Wei, C. An emergency decision-making method based on D-S evidence theory for probabilistic linguistic term sets. Int. J. Disaster Risk Reduct. 2019, 37, 101178. [Google Scholar] [CrossRef]
  39. Liang, D.; Dai, Z.; Wang, M. Assessing customer satisfaction of O2O takeaway based on online reviews by integrating fuzzy comprehensive evaluation with AHP and probabilistic linguistic term sets. Appl. Soft Comput. 2021, 98, 106847. [Google Scholar] [CrossRef]
  40. Chen, X.; Zhang, W.; Xu, X.; Cao, W. A public and large-scale expert information fusion method and its application: Mining public opinion via sentiment analysis and measuring public dynamic reliability. Inf. Fusion 2022, 78, 71–85. [Google Scholar] [CrossRef]
Figure 1. The confidence and plausibility values of alternatives A 1 and A 3.
Figure 2. Supporting interval and trust interval of an alternative.
Figure 3. Procedure for proposed methodology.
Figure 4. Comments of a review in website t 1.
Figure 5. Ranking order values of alternatives.
Table 1. Mass function values in Example 1.
 | m_j({A1}) | m_j({A2}) | m_j({A3}) | m_j({A4}) | m_j({A1, A2}) | m_j({A3, A4}) | m_j(Θ)
C1 | 0.2031 | 0.0950 | 0.1901 | 0.1067 | 0.2016 | 0.1231 | 0.0804
C2 | 0.1611 | 0.0934 | 0.2090 | 0.2208 | 0.1167 | 0.1092 | 0.0898
Table 2. Mass function values in Example 2.
 | m_j({A1}) | m_j({A2}) | m_j({A3}) | m_j({A4}) | m_j({A1, A2}) | m_j({A3, A4}) | m_j(Θ)
C1 | 0.4131 | 0 | 0.1950 | 0.1469 | 0 | 0.1531 | 0.0919
C2 | 0.2730 | 0 | 0.1751 | 0.2707 | 0 | 0.2192 | 0.0620
Table 3. Main notations appearing in the sentiment analyses.
Notation | Explanation
S = {s_{-η}, …, s_0, …, s_η} | The set of linguistic terms, where s_α (α = -η, …, 0, …, η) is a linguistic term.
u_{ijπ}^{(k)} | The πth review from the kth website evaluating the ith alternative with respect to the jth attribute.
P_{ijπ}^{*(k)} | The number of positive sentiment words in the πth review for attribute C_j of alternative A_i on website t_k.
N_{ijπ}^{*(k)} | The number of negative sentiment words in the πth review for attribute C_j of alternative A_i on website t_k.
PD_{ijπ}^{*(k)} | The number of positive degree adverb words in the πth review for attribute C_j of alternative A_i on website t_k.
ND_{ijπ}^{*(k)} | The number of negative degree adverb words in the πth review for attribute C_j of alternative A_i on website t_k.
Ø_{ij}^{(k)} | The number of users whose evaluations of attribute C_j of alternative A_i are missing on website t_k.
Table 4. An example of calculating dis(PV, DG).
Review | Classification of Words | The Examples of Degree Words | The Sentimental Orientation of Words
The space inside the car is quite spacious. As soon as you get into the car, you will feel very light and bright. Whether you sit in the front or the back, the useful space is very gelivable and very large. I, 1.78 m, can have such experiences, so it is no problems for other people. It is not crowded at all for three people sitting in the back.Positive sentiment words: spacious, light and bright, gelivable, large.
Negative sentiment words: problem, crowded
Degree words: quite, very1, very2, very3.
dis(PV, DG): dis(PV, quite): dis(spacious, quite) = 0, dis(light and bright, quite) = 13, dis(gelivable, quite) = 30, dis(large, quite) = 33; min(dis(PV, quite)) = 0 < min(dis(NV, quite)) = 45
The degree adverb “quite” is regarded as a positive degree ( P D ) word
dis(NV, DG): dis(NV, quite): dis(problem, quite) = 45, dis(crowded, quite) = 52
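Table 4 attributes a degree adverb to the positive or negative side according to its distance to the nearest sentiment word. A minimal sketch of this assignment is given below; it measures distance in token positions rather than characters, and the word lists are illustrative only.

```python
def classify_degree_adverb(tokens, adverb_idx, positive_words, negative_words):
    """Assign the degree adverb at position adverb_idx to PD (positive degree) or
    ND (negative degree) by the nearest sentiment word, as illustrated in Table 4."""
    d_pos = [abs(adverb_idx - i) for i, t in enumerate(tokens) if t in positive_words]
    d_neg = [abs(adverb_idx - i) for i, t in enumerate(tokens) if t in negative_words]
    if not d_pos and not d_neg:
        return None                       # no sentiment word to attach to
    if not d_neg or (d_pos and min(d_pos) <= min(d_neg)):
        return "PD"
    return "ND"

tokens = "the useful space is quite spacious but the trunk is crowded".split()
print(classify_degree_adverb(tokens, tokens.index("quite"), {"spacious"}, {"crowded"}))  # PD
```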
Table 5. Classification illustrating the sentiment orientation of reviews.
Review Symbols | Review | The Sentiment Orientation of Words | Quantity Comparison | Review Orientation
u 4 , 1 , 5 ( 2 ) The space inside the car is quite spacious. As soon as you get into the car, you will feel very light and bright. Whether you sit in the front or the back, the useful space is very gelivable and very large. I, 1.78 m, can have such experiences, so it is no problems for other people. It is not crowded at all for three people sitting in the back. P 4 , 1 , 5 ( 2 ) : spacious, light and bright, gelivable, large
N 4 , 1 , 5 ( 2 ) : problem, crowded
P D 4 , 1 , 5 ( 2 ) : quite, very1, very2, very3
N D 4 , 1 , 5 ( 2 ) : at all
D N 4 , 1 , 5 ( 2 ) : no, not
P 4 , 1 , 5 ( 2 ) = 8 > N 4 , 1 , 5 ( 2 ) = 2 P D 4 , 1 , 5 ( 2 ) = 4 > N D 4 , 1 , 5 ( 2 ) = 1 s 2 (more like)
u 1 , 3 , 9 ( 1 ) The technology configuration is average, but the screen resolution is okay. While surfing the internet, the configuration is not smooth enough, and it is stuck sometimes. But the navigation is more convenient than that of the mobile phone. P 1 , 3 , 9 ( 1 ) : is okay, smooth, convenient
N 1 , 3 , 9 ( 1 ) : stuck
D N 1 , 3 , 9 ( 1 ) : not
P 1 , 3 , 9 ( 1 ) = 3 = N 1 , 3 , 9 ( 1 ) = 3 P D 1 , 3 , 9 ( 1 ) = 0 = N D 1 , 3 , 9 ( 1 ) = 0 s 0 (general)
u 5 , 1 , 8 ( 3 ) The space in the front row is good, but the back row is very crowded. It is hard for adults to sit in the back row. The trunk is so small that some gift boxes cannot be held during the Chinese New Year. P 5 , 1 , 8 ( 3 ) : good
N 5 , 1 , 8 ( 3 ) : crowded, so small, can’t be held
N D 5 , 1 , 8 ( 3 ) : very, hard
P 5 , 1 , 8 ( 3 ) = 1 < N 5 , 1 , 8 ( 3 ) = 3, P D 5 , 1 , 8 ( 3 ) = 0 < N D 5 , 1 , 8 ( 3 ) = 2 → s −2 (more annoying)
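The classification illustrated in Table 5 compares the counts of positive and negative sentiment words, and of positive and negative degree adverbs, to pick one of the five linguistic terms. The following sketch is a rough reconstruction of that rule from the three rows of the table; the paper's full rule set (including negation handling) is richer, so this is only an approximation.

```python
def classify_review(n_pos, n_neg, n_pos_deg, n_neg_deg):
    """Map the word counts of one review to a linguistic term in {-2, ..., 2}.
    The sentiment-word counts decide the orientation, the degree-adverb counts
    decide its strength, as suggested by the examples in Table 5."""
    if n_pos > n_neg:                      # positive orientation
        return 2 if n_pos_deg > n_neg_deg else 1
    if n_pos < n_neg:                      # negative orientation
        return -2 if n_pos_deg < n_neg_deg else -1
    return 0                               # balanced counts -> neutral

# The three reviews of Table 5: expected terms s_2, s_0 and s_-2.
print(classify_review(8, 2, 4, 1), classify_review(3, 3, 0, 0), classify_review(1, 3, 0, 2))
```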
Table 6. Confidence and plausibility values in Example 3.
 | {A1} | {A2} | {A3} | {A4} | {A1, A2} | {A3, A4} | Θ
Bel | 0.2522 | 0.1151 | 0.2522 | 0.1884 | 0.4744 | 0.5105 | 1
Pl | 0.3745 | 0.2374 | 0.3373 | 0.2735 | 0.4896 | 0.5257 |
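For reference, belief and plausibility are obtained from a mass function by summing over subsets and over intersecting sets, respectively. The sketch below implements these two definitions on a toy mass function (the values are illustrative, not those of Example 3).

```python
def bel_pl(mass):
    """Belief and plausibility of a subset A, given a mass function represented
    as a dict {frozenset: m}: Bel(A) sums m(B) over B ⊆ A, while Pl(A) sums m(B)
    over all B with B ∩ A ≠ ∅."""
    def bel(a):
        return sum(m for b, m in mass.items() if b <= a)
    def pl(a):
        return sum(m for b, m in mass.items() if b & a)
    return bel, pl

# Toy mass function on Θ = {A1, ..., A4} (illustrative values only).
theta = frozenset({"A1", "A2", "A3", "A4"})
m = {frozenset({"A1"}): 0.3, frozenset({"A2"}): 0.2,
     frozenset({"A1", "A2"}): 0.1, theta: 0.4}
bel, pl = bel_pl(m)
a = frozenset({"A1"})
print(bel(a), pl(a))   # 0.3 and 0.3 + 0.1 + 0.4 = 0.8
```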
Table 7. Distribution of number of reviews.
Alternatives | C1 | C2 | C3 | C4 | C5 | C6 | C7
A1 | 85 | 85 | 63 | 85 | 85 | 85 | 85
A2 | 133 | 132 | 130 | 132 | 133 | 132 | 133
A3 | 88 | 88 | 87 | 87 | 87 | 87 | 87
A4 | 104 | 104 | 101 | 105 | 105 | 102 | 104
A5 | 105 | 104 | 104 | 105 | 105 | 105 | 105
Table 8. The standardized PLTS.
Attribute | A1 | A2 | A3 | A4 | A5
C 1 { s 2 ( 0.0235 ) , s 1 ( 0.0353 ) , s 0 ( 0.1529 ) , s 1 ( 0.2235 ) , s 2 ( 0.5647 ) } { s 2 ( 0.0239 ) , s 1 ( 0.0687 ) , s 0 ( 0.2552 ) , s 1 ( 0.1806 ) , s 2 ( 0.4716 ) } { s 2 ( 0.0114 ) , s 1 ( 0.0682 ) , s 0 ( 0.2045 ) , s 1 ( 0.1591 ) , s 2 ( 0.5568 ) } { s 2 ( 0.0210 ) , s 1 ( 0.0400 ) , s 0 ( 0.0876 ) , s 1 ( 0.1543 ) , s 2 ( 0.6971 ) } { s 2 ( 0.0095 ) , s 1 ( 0.0095 ) , s 0 ( 0.1238 ) , s 1 ( 0.2190 ) , s 2 ( 0.6381 ) }
C 2 { s 1 ( 0.0471 ) , s 0 ( 0.1176 ) , s 1 ( 0.1647 ) , s 2 ( 0.6706 ) } { s 2 ( 0.0254 ) , s 1 ( 0.0254 ) , s 0 ( 0.1597 ) , s 1 ( 0.1821 ) , s 2 ( 0.6075 ) } { s 2 ( 0.0227 ) , s 1 ( 0.0114 ) , s 0 ( 0.0682 ) , s 1 ( 0.1705 ) , s 2 ( 0.7273 ) } { s 2 ( 0.0114 ) , s 1 ( 0.0210 ) , s 0 ( 0.0686 ) , s 1 ( 0.0971 ) , s 2 ( 0.8019 ) } { s 2 ( 0.0019 ) , s 1 ( 0.0495 ) , s 0 ( 0.0305 ) , s 1 ( 0.1638 ) , s 2 ( 0.7543 ) }
C 3 { s 2 ( 0.0753 ) , s 1 ( 0.0635 ) , s 0 ( 0.1341 ) , s 1 ( 0.2282 ) , s 2 ( 0.4988 ) } { s 2 ( 0.0134 ) , s 1 ( 0.0582 ) , s 0 ( 0.2075 ) , s 1 ( 0.2000 ) , s 2 ( 0.5209 ) } { s 2 ( 0.0136 ) , s 1 ( 0.0250 ) , s 0 ( 0.1386 ) , s 1 ( 0.2182 ) , s 2 ( 0.6045 ) } { s 2 ( 0.0171 ) , s 1 ( 0.0267 ) , s 0 ( 0.1029 ) , s 1 ( 0.1505 ) , s 2 ( 0.7029 ) } { s 2 ( 0.0210 ) , s 1 ( 0.0495 ) , s 0 ( 0.0686 ) , s 1 ( 0.2019 ) , s 2 ( 0.6590 ) }
C 4 { s 2 ( 0.0118 ) , s 1 ( 0.0471 ) , s 0 ( 0.0588 ) , s 1 ( 0.2000 ) , s 2 ( 0.6824 ) } { s 2 ( 0.0179 ) , s 1 ( 0.0552 ) , s 0 ( 0.1149 ) , s 1 ( 0.1896 ) , s 2 ( 0.6224 ) } { s 2 ( 0.0023 ) , s 1 ( 0.0250 ) , s 0 ( 0.1045 ) , s 1 ( 0.1500 ) , s 2 ( 0.7182 ) } { s 1 ( 0.0190 ) , s 0 ( 0.0857 ) , s 1 ( 0.1143 ) , s 2 ( 0.7810 ) } { s 2 ( 0.0190 ) , s 1 ( 0.0095 ) , s 0 ( 0.0762 ) , s 1 ( 0.2095 ) , s 2 ( 0.6857 ) }
C 5 { s 1 ( 0.0471 ) , s 0 ( 0.0824 ) , s 1 ( 0.1176 ) , s 2 ( 0.7529 ) } { s 2 ( 0.0239 ) , s 1 ( 0.0612 ) , s 0 ( 0.1806 ) , s 1 ( 0.1881 ) , s 2 ( 0.5463 ) } { s 2 ( 0.0136 ) , s 1 ( 0.0023 ) , s 0 ( 0.0818 ) , s 1 ( 0.0932 ) , s 2 ( 0.8091 ) } { s 2 ( 0.0095 ) , s 1 ( 0.0286 ) , s 0 ( 0.0571 ) , s 1 ( 0.0857 ) , s 2 ( 0.8190 ) } { s 2 ( 0.0095 ) , s 0 ( 0.0857 ) , s 1 ( 0.0857 ) , s 2 ( 0.8190 ) }
C 6 { s 2 ( 0.0353 ) , s 1 ( 0.0706 ) , s 0 ( 0.1647 ) , s 1 ( 0.1882 ) , s 2 ( 0.5412 ) } { s 2 ( 0.0254 ) , s 1 ( 0.0776 ) , s 0 ( 0.2194 ) , s 1 ( 0.2642 ) , s 2 ( 0.4134 ) } { s 2 ( 0.0250 ) , s 1 ( 0.0250 ) , s 0 ( 0.1500 ) , s 1 ( 0.2977 ) , s 2 ( 0.5023 ) } { s 2 ( 0.0057 ) , s 1 ( 0.0248 ) , s 0 ( 0.0914 ) , s 1 ( 0.2343 ) , s 2 ( 0.6438 ) } { s 2 ( 0.0286 ) , s 1 ( 0.0190 ) , s 0 ( 0.0857 ) , s 1 ( 0.1333 ) , s 2 ( 0.7333 ) }
C 7 { s 1 ( 0.0588 ) , s 0 ( 0.2588 ) , s 1 ( 0.2706 ) , s 2 ( 0.4118 ) } { s 2 ( 0.0164 ) , s 1 ( 0.0836 ) , s 0 ( 0.2030 ) , s 1 ( 0.3149 ) , s 2 ( 0.3821 ) } { s 2 ( 0.0136 ) , s 1 ( 0.0932 ) , s 0 ( 0.1841 ) , s 1 ( 0.1727 ) , s 2 ( 0.5364 ) } { s 2 ( 0.0210 ) , s 1 ( 0.1162 ) , s 0 ( 0.2400 ) , s 1 ( 0.2305 ) , s 2 ( 0.3924 ) } { s 2 ( 0.0667 ) , s 1 ( 0.0762 ) , s 0 ( 0.1810 ) , s 1 ( 0.2190 ) , s 2 ( 0.4571 ) }
Table 9. BPAs of the alternatives with respect to attributes.
Alternative | C1 | C2 | C3 | C4 | C5 | C6 | C7
A1 | 0.1672 | 0.1628 | 0.1473 | 0.1683 | 0.1722 | 0.1635 | 0.1651
A2 | 0.1432 | 0.1494 | 0.1536 | 0.1549 | 0.1330 | 0.1358 | 0.1557
A3 | 0.1636 | 0.1749 | 0.1743 | 0.1757 | 0.1828 | 0.1587 | 0.2028
A4 | 0.1976 | 0.1896 | 0.1958 | 0.1889 | 0.1843 | 0.1909 | 0.1565
A5 | 0.1856 | 0.1805 | 0.1862 | 0.1694 | 0.1847 | 0.2084 | 0.1769
Θ | 0.1428 | 0.1429 | 0.1428 | 0.1429 | 0.1428 | 0.1428 | 0.1429
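Table 9 provides one BPA per attribute, each defined over the singletons {A1}, …, {A5} and the frame Θ. The sketch below shows how such pieces of evidence can be fused with the classical Dempster rule of combination; the proposed method uses a possible degree-based variant that also incorporates the attribute weights, so this plain rule is only an approximation of that step.

```python
from functools import reduce

def dempster(m1, m2):
    """Dempster's rule of combination for two mass functions over frozensets."""
    fused, conflict = {}, 0.0
    for b1, v1 in m1.items():
        for b2, v2 in m2.items():
            inter = b1 & b2
            if inter:
                fused[inter] = fused.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    return {a: v / (1.0 - conflict) for a, v in fused.items()}

# Two toy per-attribute BPAs over {A1, A2} and Θ (not the Table 9 numbers).
theta = frozenset({"A1", "A2"})
m_c1 = {frozenset({"A1"}): 0.5, frozenset({"A2"}): 0.3, theta: 0.2}
m_c2 = {frozenset({"A1"}): 0.4, frozenset({"A2"}): 0.4, theta: 0.2}
combined = reduce(dempster, [m_c1, m_c2])
print(combined)   # masses renormalized after discarding the conflict of 0.32
```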
Table 10. The confidence values and plausibility values of alternatives.
Value | A1 | A2 | A3 | A4 | A5
Bel^{(1)}(A_i) | 0.1612 | 0.1071 | 0.2119 | 0.2640 | 0.2550
Pl^{(1)}(A_i) | 0.1620 | 0.1078 | 0.2127 | 0.2648 | 0.2558
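Each alternative is thus summarized by an interval [Bel^{(1)}(A_i), Pl^{(1)}(A_i)]. One common possible-degree formula for comparing two such intervals is sketched below with the A4 and A5 intervals of Table 10; the exact dominance/possible degree adopted in the paper (cf. [35]) may differ in form.

```python
def possible_degree(a, b):
    """P(a >= b) for intervals a = (a_lo, a_hi) and b = (b_lo, b_hi):
    min{max{(a_hi - b_lo) / (width of a + width of b), 0}, 1}."""
    a_lo, a_hi = a
    b_lo, b_hi = b
    width = (a_hi - a_lo) + (b_hi - b_lo)
    if width == 0:                         # two degenerate intervals
        return 1.0 if a_lo >= b_lo else 0.0
    return min(max((a_hi - b_lo) / width, 0.0), 1.0)

# Intervals [Bel, Pl] of A4 and A5 taken from Table 10.
a4, a5 = (0.2640, 0.2648), (0.2550, 0.2558)
print(possible_degree(a4, a5))   # 1.0, so A4 is ranked before A5
```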
Table 11. Sensitivity analysis results of the balancing coefficient ε.
Parameter Settings | Attribute Weights | Alternative Rankings
ε = 0 w ¯ ( 1 ) = ( 0.1493 , 0.1218 , 0.1532 , 0.1151 , 0.1683 , 0.1801 , 0.1123 ) T A 4 A 5 A 3 A 1 A 2
ε = 0.1 w ¯ ( 1 ) = ( 0.1500 , 0.1180 , 0.1546 , 0.1105 , 0.1726 , 0.1870 , 0.1073 ) T A 4 A 5 A 3 A 1 A 2
ε = 0.2 w ¯ ( 1 ) = ( 0.1508 , 0.1118 , 0.1566 , 0.1029 , 0.1798 , 0.1987 , 0.0993 ) T A 4 A 5 A 3 A 1 A 2
ε = 0.3 w ¯ ( 1 ) = ( 0.1514 , 0.1019 , 0.1591 , 0.0913 , 0.1911 , 0.2181 , 0.0871 ) T A 4 A 5 A 3 A 1 A 2
ε = 0.4 w ¯ ( 1 ) = ( 0.1514 , 0.0997 , 0.1596 , 0.0888 , 0.1935 , 0.2225 , 0.0845 ) T A 4 A 5 A 3 A 1 A 2
ε = 0.5 w ¯ ( 1 ) = ( 0.1502 , 0.1168 , 0.1550 , 0.1089 , 0.1741 , 0.1893 , 0.1057 ) T A 4 A 5 A 3 A 1 A 2
ε = 0.6 w ¯ ( 1 ) = ( 0.1476 , 0.1290 , 0.1501 , 0.1242 , 0.1597 , 0.1671 , 0.1222 ) T A 4 A 5 A 3 A 1 A 2
ε = 0.7 w ¯ ( 1 ) = ( 0.1457 , 0.1353 , 0.1470 , 0.1326 , 0.1521 , 0.1559 , 0.1314 ) T A 4 A 5 A 3 A 1 A 2
ε = 0.8 w ¯ ( 1 ) = ( 0.1444 , 0.1389 , 0.1451 , 0.1375 , 0.1477 , 0.1496 , 0.1369 ) T A 4 A 5 A 3 A 1 A 2
ε = 0.9 w ¯ ( 1 ) = ( 0.1435 , 0.1413 , 0.1438 , 0.1406 , 0.1448 , 0.1456 , 0.1404 ) T A 4 A 5 A 3 A 1 A 2
Table 12. The comparison results with existing NEV selection methods based on online reviews.
Methods | Expression Forms | Rank Methods | The Number of Websites
Yang's method [7] | q-ROFS | Prospect theory | Single
Tian's method [6] | HIFS | ORESTE | Single
Liu's method [21] | HPFS | TODIM-MULTIMOORA | Single
The proposed method | PLTS | Possible degree-based D–S evidence theory | Multiple websites
Table 13. Comparison results of the proposed method and existing methods in the PLTS context.
Methods | Decision Information | Determination of Attribute Weights | Attribute Weights | Decision Methods | Alternatives Ranking Orders
Du's method [37] | Provided by DMs | Given subjectively | w̄^{(1)} = (0.1502, 0.1168, 0.1550, 0.1089, 0.1741, 0.1893, 0.1057)^T | Prospect theory | A4 ≻ A5 ≻ A1 ≻ A3 ≻ A2
Li's method [38] | Provided by DMs | Maximizing deviation | w̄^{(1)} = (0.1493, 0.1218, 0.1532, 0.1151, 0.1683, 0.1801, 0.1123)^T | Operator based on D–S evidence | A4 ≻ A5 ≻ A3 ≻ A1 ≻ A2
Liang's method [39] | Online reviews | AHP and deviation | w̄^{(1)} = (0.1349, 0.1405, 0.1300, 0.1364, 0.1582, 0.1855, 0.1145)^T | Fuzzy comprehensive evaluation | A4 ≻ A5 ≻ A3 ≻ A1 ≻ A2
The proposed method | Online reviews | Uncertainty degree and maximum deviation | w̄^{(1)} = (0.1502, 0.1168, 0.1550, 0.1089, 0.1741, 0.1893, 0.1057)^T | Possible degree-based D–S evidence | A4 ≻ A5 ≻ A3 ≻ A1 ≻ A2
Table 14. Pearson correlation coefficients between methods.
 | Du's Method [37] | Li's Method [38] | Liang's Method [39] | The Proposed Method
Du's method [37] | 1.000 | 1.000 | 0.994 | 0.920
Li's method [38] | 1.000 | 1.000 | 0.995 | 0.918
Liang's method [39] | 0.994 | 0.995 | 1.000 | 0.927
The proposed method | 0.920 | 0.918 | 0.927 | 1.000