Article

A New Multi-Attribute Decision Making Method for Overvalued Star Ratings Adjustment and Its Application in New Energy Vehicle Selection

1
Institute of Big Data Intelligent Management and Decision, College of Management, Shenzhen University, Shenzhen 518060, China
2
College of Management, Shenzhen University, Shenzhen 518060, China
3
Business School, Sun Yat-sen University, Shenzhen 518107, China
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(9), 2037; https://doi.org/10.3390/math11092037
Submission received: 30 March 2023 / Revised: 19 April 2023 / Accepted: 24 April 2023 / Published: 25 April 2023
(This article belongs to the Section Fuzzy Sets, Systems and Decision Making)

Abstract
Under the global consensus of carbon peaking and carbon neutrality, new energy vehicles have gradually become mainstream, driven by the dual crises regarding the atmospheric environment and energy security. When choosing new energy vehicles, consumers prefer to browse the post-purchase reviews and star ratings of various new energy vehicles on platforms. However, consumers easily become lost in overvalued high-star reviews and in reviews that do not match their demands. To address these two issues, this study selected nine new energy vehicles and used a multi-attribute decision making method to rank them. We first designed adjustment rules based on star ratings and text reviews to cope with the issue of high star ratings accompanied by negative text reviews. Secondly, we classified consumers and recommended the optimal alternative for each type of consumer to deal with the issue of mismatched demands between review writers and viewers. Finally, this study compared the ranking results with the sales charts of the past year to provide an initial verification of the proposed method's feasibility. The feasibility and stability of the method were further verified through comparative and sensitivity analyses.

1. Introduction

With the development of urbanization and industrialization, fuel vehicles’ production and sales have been increasing yearly. Fuel vehicles have entered millions of households, bringing convenience to people’s travel but affecting air quality [1]. It is well known that driving fuel vehicles emits large amounts of exhaust gases, such as carbon dioxide and other compounds, which cause air pollution and global warming [2]. In addition, fuel vehicles are highly dependent on oil and other energy sources, and countries need to reserve or import these resources, an issue tied to national energy security [3]. In contrast, new energy vehicles mainly use clean energy sources such as electricity and hydrogen, which are friendlier to the environment [2]. Therefore, driven by the dual crises regarding the atmospheric environment and energy security, new energy vehicles, represented by electric vehicles, came into being [4,5]. Under the global consensus of carbon peaking and carbon neutrality, governments encourage people to use new energy vehicles, which have gradually become mainstream [3].
Since the automobile is a highly involved and high-value product, consumers need to consider it carefully before purchasing a new energy vehicle [6]. When choosing a new energy vehicle, different consumers have different requirements. Some consumers focus on a desirable appearance, while others emphasize high performance. In addition, consumers often choose a new energy vehicle considering multiple factors that are sometimes conflicting. For example, compact vehicles are generally low in price but compact in space. By contrast, medium and large vehicles have better performance but may have a higher price. These complicate the selection process and consumers need to spend a great deal of time in decision making, which often lasts for months or even years [7]. As the Internet has developed, social media has begun to rise and hundreds of millions of users are posting comments on social media [8]. In China, auto forums such as Autohome and DCar are popular, where many consumers gather to exchange, discuss, and evaluate products, etc. Hence, there is a large amount of real user-generated content on auto forums [9]. When choosing a new energy vehicle, consumers prefer to browse the post-purchase reviews and star ratings of various new energy vehicles on platforms such as DCar. As more comments are viewed, consumers will have a better understanding of new energy vehicles, which will help them make the purchase decision. Thus, text reviews and star ratings on automotive forums are important to help consumers to assess the quality of products or services, reduce perceived risks and uncertainty, and then increase their purchase willingness [10,11].
However, a growing number of consumers are noticing that high-star ratings are far more numerous than low-star ratings. There are two main reasons for this. On the one hand, there is herd behavior when consumers give star ratings, which is easily influenced by average ratings or other consumers’ ratings [12]. On the other hand, a consumer who is dissatisfied with a product or service will give a negative text review along with a high star rating in order to avoid being harassed by the merchant [13]. However, consumers are still confused when reading a high star rating but a negative text review, and it is difficult for them to discern the reviewer’s true feelings about the products or services. In addition, with the development of social commerce, the number of text reviews has grown dramatically, and a wealth of reviews can reflect different consumers’ evaluations of products or services. Nevertheless, consumers often become lost in a large number of mismatched reviews. Consumer demands are diverse, and when the consumer and the reviewer have different demands, the reviewer’s comments may not be what the consumer wants. For example, some consumers are more concerned about performance when buying new energy vehicles, so they focus on power consumption, brakes, power, and other information when writing reviews. Others pursue appearance and focus on color, appearance, design, style, and other information when commenting. Meanwhile, some seek comfort, for family trips, and their reviews will mention family members’ feelings about the car, seats, and so on. Moreover, some of them are looking for experiential qualities, and their reviews will mention information about intelligent systems, services, etc. 
When performance-oriented consumers browse text reviews, they are likely to read reviews from appearance-oriented reviewers, comfort-oriented reviewers, and experience-oriented reviewers, which are not what they are looking for, and this type of consumer will easily become lost in the mass of mismatched reviews.
Therefore, this study proposes to address the following two issues.
  • How to handle user-generated content with high star ratings but negative text reviews;
  • How to identify different consumer demands and give targeted purchase suggestions.
Multi-attribute decision making refers to the ranking and selection of several alternatives by considering their performance under different attributes and applying decision making methods. In this line of research, linguistic term sets are used to denote the evaluation values of alternatives under different attributes. In 1965, Zadeh first defined the concept of fuzzy sets [14], and in 1975, Zadeh proposed the fuzzy linguistic approach to represent linguistic information [15,16,17]. Subsequently, many linguistic term sets have been proposed, including the hesitant fuzzy linguistic term set (HFLTS) [18], the interval-valued HFLTS [19], the hesitant 2-tuple fuzzy linguistic term set [20], and so on. In 2016, Pang et al. added probability values to hesitant fuzzy linguistic term sets and defined probabilistic linguistic term sets (PLTSs) [21]. As mentioned above, consumers consider various factors when purchasing a new energy vehicle, including space, power, power consumption, cost performance, interior, appearance, and comfort. Because these factors often conflict, consumers usually need to choose the best among several new energy vehicles, so the selection process can be seen as a multi-attribute decision making problem. Since text reviews are unstructured and consumers’ evaluations of a new energy vehicle differ across attributes, this research uses PLTSs to transform text reviews into linguistic information for subsequent decision analysis.
Based on the multi-attribute decision method, this study designed a mathematical model to select the optimal new energy vehicle. We evaluated the user-generated content on the DCar platform to test the proposed model and then used it to select an optimal new energy vehicle for consumers with different demands. The main contributions of this study are as follows. Firstly, this study has two types of data sources, namely star ratings and text reviews, which is different from previous studies that only consider a single data source. Moreover, adjustment rules are designed to modify the overrated high star ratings through the sentiments of the corresponding text comments. Secondly, this study uses the LDA model to identify consumer demands and proposes to select an optimal new energy vehicle for consumers with different demands. Thirdly, this study innovatively defines the concept of attribute richness and considers dual attribute characteristics, including richness and dissimilarity, to calculate attribute weights. This is different from previous studies that only use the entropy weight method or TF-IDF value to calculate attribute weights. Fourthly, this research conducts a cross-sensitivity analysis for two pairs of parameters, namely the star rating–text reviews parameter and the dissimilarity–richness parameter.
The remainder of this manuscript is organized as follows. Section 2 is the literature review. Section 3 is the research methodology. This study defines the aggregation method for star ratings and text reviews. Moreover, this research defines the concept of attribute richness and designs the weight calculation method and mathematical model. Section 4 presents a case study of new energy vehicle selection on DCar. Comparison analysis and sensitivity analysis are conducted in Section 5. Finally, Section 6 offers the conclusions.

2. Literature Review

2.1. New Energy Vehicles

Many studies try to identify the influencing factors that cause consumers to buy new energy vehicles. Wang and Dong surveyed consumers in four Chinese cities, Beijing, Shanghai, Tianjin, and Chongqing, and found that perceived ease of use, subjective norms, and perceived behavioral control all significantly increased consumer purchase intention, and that perceived behavioral control had a significant moderating effect on subjective norms [22]. Interestingly, this study also found that perceived ease of use had a significant impact on those who would be unwilling to purchase a new energy vehicle, while subjective norms had a significant influence on potential consumers who were hesitant to purchase one [22]. Ma et al. found that, on the one hand, the subsidy policy and tax reduction policy can stimulate consumers to buy new energy vehicles by reducing their purchase and use costs, and the tax reduction policy has the strongest long-term effect [23]. On the other hand, China’s purchase restriction policy and traffic restriction policy on fuel vehicles can improve consumers’ willingness to purchase new energy vehicles by regulating supply and demand in the market [23]. Significantly, it has also been found that green self-identity has a significant positive effect on both personal norms and purchase willingness regarding new energy vehicles, and that mianzi (concern for “face”) and green peer influence can positively moderate the relationship between green self-identity and purchase willingness [24]. In addition to psychological factors, subsidy policies, environmental awareness, etc., product performance also affects consumers’ purchase willingness regarding new energy vehicles. Research has found that product attributes such as price, charging time, driving distance, pollutant emission, and energy consumption cost significantly affect consumer purchase intention [25].
Along with the development of social media, many studies have been conducted to analyze user-generated content, such as star ratings and text reviews, but there are fewer studies on new energy vehicles. Cai et al. first extracted product features from new energy vehicle reviews by using machine learning models, then used hierarchical clustering models for feature classification, followed by demand ranking based on customer satisfaction scores, and finally employed statistical methods for demand preference identification [26]. Since 2017, some studies have started to extend the multi-attribute decision making method to the new energy vehicle field. Liu et al. first established a capability–willingness–risk (C-W-R) evaluation indicator system and then proposed a multi-criteria decision making method based on the best–worst method, prospect theory, and the VIKOR method to obtain the best innovative supplier for new energy vehicle manufacturers [27]. Nicolalde et al. first used the removal-effects-of-criteria method to weight the criteria, then applied the VIKOR, COPRAS, and TOPSIS methods to evaluate 20 candidate materials, and identified the best phase change material, savENRG PCM-HS22P [28]. However, these studies mainly focus on the selection of new energy vehicle suppliers, the selection of vehicle materials, the locations of charging stations, and so on. There is still a lack of research on new energy vehicle brand selection based on user-generated content.

2.2. Multi-Attribute Decision Making Method

The multi-attribute decision making method generally involves several attributes and several solutions, and the optimal alternative is selected by fusing the experience and wisdom of several experts [29,30,31]. The evaluation value of each alternative under different attributes is given by each expert, which is denoted by various linguistic term sets. Common linguistic term sets include intuitionistic fuzzy sets [32], probabilistic linguistic term sets [33], interval values [34], linguistic distribution assessments [35], and so on. There are abundant decision making methods in the field of multi-attribute decision making, such as TOPSIS [36], TODIM [37], VIKOR [38], ELECTRE [39], MULTIMOORA [40], and so on. Multi-attribute decision making methods have been extended to various fields, including distribution center site selection [41], the selection of emergency solutions [42], the selection of disaster handling solutions [43], and so on. Wang et al. used the fuzzy AHP method and fuzzy deviation maximizing method to calculate attribute weights and combined five types of multi-attribute decision methods, namely the TOPSIS, TODIM, VIKOR, PROMETHEE, and ELECTRE methods, with the simple dominance principle to rank the bidding options and select the best one [44]. Dabous et al. established a utility function set based on the analytic hierarchy process and multi-attribute utility functions for large pavement network selection considering the sustainability of pavement sections [45]. Yang et al. constructed an evaluation index system for the coordinated development of regional ecology based on ecological, economic, social, and policy factors; used the closeness to construct the value function to calculate attribute weights, and finally evaluated the regional ecological development performance of 27 cities in China based on the proposed heterogeneous decision model [46]. Huang et al. used interval numbers to represent the attribute information in the group decision matrix, and then proposed a distributed interval weighted average operator to integrate qualitative data and quantitative judgments; then, they defined relevant operation rules, and finally ranked and selected the best green suppliers [34]. To solve the problem that different approximation methods for rough sets affect the results, Wang et al. first proposed an attribute metric method based on fuzzy sets, and then constructed Choquet integral operators based on the attribute metric and matching degree, and finally used the operators to rank and select the alternatives [47].

2.3. Product Ranking Method Based on Multi-Attribute Decision Making

Fan et al. divided the text-review-based product ranking process into three phases, namely product feature extraction, sentiment analysis, and product ranking, and pointed out that information fusion methods could be used to integrate the sentiment analysis results [48]. Common information fusion methods include the WA operator, OWA operator, fusion operator based on intuitionistic fuzzy numbers, fusion methods based on weighted directed graph construction, and information fusion methods based on hesitant fuzzy numbers [48]. Most of the mentioned information fusion methods belong to the field of multi-attribute decision making, and product ranking based on multi-attribute decision methods has indeed attracted the attention of many scholars. A brief description of some recent studies is shown in Table 1.
According to Table 1, in terms of data sources, most papers only used text reviews for product ranking, and a few papers only used star ratings [49,52]. It is worth noting that some papers’ data sources included both text reviews and star ratings. However, they did not integrate these two types of data sources; instead, they used text reviews and star ratings separately. Qin et al. used star rating data to mark the polarity of reviews (positive, neutral, negative), while sentiment analysis and product ranking were based only on text reviews [59]. Bi et al. considered price, location, overall ratings, and text reviews; calculated the prospect values for each of these four types of data; and selected the optimal hotels based on the prospect values [60]. Tayal et al. built a benchmark based on attribute ratings to evaluate the performance of ranking methods, while their product ranking method was based only on reviews [61]. In terms of applications, many studies focused on hotels, automobiles, and mobile phones. Regarding the contribution phases, data processing, sentiment analysis, weight calculation, and product ranking have all seen methodological innovations.

3. Methodology

This study proposes a new mathematical model for new energy vehicle selection based on two types of data, which are text reviews and star ratings, considering the dual characteristics of attributes. In Appendix A, Table A1 lists all acronyms used in this paper.

3.1. Preliminaries

In this section, we review the definition of probabilistic linguistic term sets, the score of PLTS calculation formula, the comparison rule of PLTS, and the distance measure formula.
Definition 1
([21]). Let $S = \{ s_\alpha \mid \alpha = 0, 1, \ldots, \tau \}$ be a linguistic term set (LTS) with asymmetric subscripts. Based on LTSs, probabilistic linguistic term sets (PLTSs) attach probability values to the different linguistic terms. Thus, a PLTS can be defined as
\[ L(p) = \left\{ L^{(k)}\left(p^{(k)}\right) \,\middle|\, L^{(k)} \in S,\ p^{(k)} \ge 0,\ k = 1, 2, \ldots, \# L(p),\ \sum_{k=1}^{\# L(p)} p^{(k)} \le 1 \right\} \]
where $L^{(k)}(p^{(k)})$ refers to the $k$-th linguistic term $L^{(k)}$ with the probability value $p^{(k)}$. As elements of $S$, the linguistic terms $L^{(k)}$, $k = 1, 2, \ldots, \# L(p)$, are arranged in ascending order, and $\# L(p)$ denotes the number of elements in $L(p)$. Each probability value $p^{(k)}$ must be greater than or equal to 0, and the probability values of a PLTS must sum to at most 1. When the sum of the probability values is less than 1, the evaluation information is incomplete and requires normalization so that the probabilities sum to 1. The normalized probability value is $\bar{p}^{(k)} = p^{(k)} / \sum_{k=1}^{\# L(p)} p^{(k)}$.
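As a concrete illustration, the normalization at the end of Definition 1 can be sketched in Python, representing a PLTS as a list of (subscript, probability) pairs. This representation and the helper name are ours, not the paper's implementation.

```python
# A PLTS is represented here as [(term_subscript, probability), ...].

def normalize_plts(plts):
    """Normalize the probabilities so they sum to 1 (incomplete information)."""
    total = sum(p for _, p in plts)
    if total == 0:
        raise ValueError("PLTS has no probability mass")
    return [(r, p / total) for r, p in plts]

# An incomplete PLTS {s_2(0.3), s_3(0.4)}: probabilities sum to 0.7,
# so they are rescaled to 3/7 and 4/7.
print(normalize_plts([(2, 0.3), (3, 0.4)]))
```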
Definition 2
([21]). Given a PLTS $L(p) = \{ L^{(k)}(p^{(k)}) \mid k = 1, 2, \ldots, \# L(p) \}$, the score of $L(p)$, also called the expected value, can be calculated as
\[ E(L(p)) = s_{\bar{\alpha}}, \qquad \bar{\alpha} = \frac{\sum_{k=1}^{\# L(p)} r^{(k)} p^{(k)}}{\sum_{k=1}^{\# L(p)} p^{(k)}} \]
where $\# L(p)$ denotes the number of elements in $L(p)$, $r^{(k)}$ is the subscript of the linguistic term $L^{(k)}$, and $p^{(k)}$ is the probability value of the linguistic term $L^{(k)}$. The deviation degree of $L(p)$ is defined as
\[ \sigma(L(p)) = \frac{\sqrt{\sum_{k=1}^{\# L(p)} \left( p^{(k)} \left( r^{(k)} - \bar{\alpha} \right) \right)^2}}{\sum_{k=1}^{\# L(p)} p^{(k)}} \]
For any two PLTSs $L_1(p) = \{ L_1^{(k_1)}(p_1^{(k_1)}) \mid k_1 = 1, 2, \ldots, \# L_1(p) \}$ and $L_2(p) = \{ L_2^{(k_2)}(p_2^{(k_2)}) \mid k_2 = 1, 2, \ldots, \# L_2(p) \}$, the comparison rules are defined as follows:
(1) If $E(L_1(p)) > E(L_2(p))$, then $L_1(p) \succ L_2(p)$;
(2) If $E(L_1(p)) < E(L_2(p))$, then $L_1(p) \prec L_2(p)$;
(3) When $E(L_1(p)) = E(L_2(p))$: if $\sigma(L_1(p)) < \sigma(L_2(p))$, then $L_1(p) \succ L_2(p)$; if $\sigma(L_1(p)) > \sigma(L_2(p))$, then $L_1(p) \prec L_2(p)$; if $\sigma(L_1(p)) = \sigma(L_2(p))$, then $L_1(p) \sim L_2(p)$.
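The score, deviation degree, and comparison rules of Definition 2 can be sketched as follows, again representing a PLTS as (subscript, probability) pairs; the function names are ours.

```python
import math

def score(plts):
    """Expected value alpha-bar of a PLTS (Definition 2)."""
    return sum(r * p for r, p in plts) / sum(p for _, p in plts)

def deviation(plts):
    """Deviation degree sigma(L(p)) of a PLTS (Definition 2)."""
    a = score(plts)
    return math.sqrt(sum((p * (r - a)) ** 2 for r, p in plts)) / sum(p for _, p in plts)

def compare(l1, l2):
    """Comparison rules: 1 if l1 is preferred, -1 if l2 is, 0 if indifferent."""
    e1, e2 = score(l1), score(l2)
    if e1 != e2:
        return 1 if e1 > e2 else -1
    s1, s2 = deviation(l1), deviation(l2)
    if s1 != s2:
        return 1 if s1 < s2 else -1   # on equal scores, smaller deviation wins
    return 0
```

For example, `compare([(3, 0.5), (4, 0.5)], [(2, 0.5), (4, 0.5)])` prefers the first PLTS because its expected value 3.5 exceeds 3.0.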
Definition 3
([21]). For any two PLTSs $L_1(p) = \{ L_1^{(k_1)}(p_1^{(k_1)}) \mid k_1 = 1, 2, \ldots, \# L_1(p) \}$ and $L_2(p) = \{ L_2^{(k_2)}(p_2^{(k_2)}) \mid k_2 = 1, 2, \ldots, \# L_2(p) \}$: if $\# L_1(p) < \# L_2(p)$, then $L_1(p)$ is extended with additional linguistic terms of probability 0 until $\# L_1(p) = \# L_2(p)$; if $\# L_1(p) > \# L_2(p)$, then $L_2(p)$ is extended with additional linguistic terms of probability 0 until $\# L_1(p) = \# L_2(p)$. When $\# L_1(p) = \# L_2(p)$, the distance between the two PLTSs can be calculated as
\[ d(L_1(p), L_2(p)) = \sqrt{ \sum_{k=1}^{\# L_1(p)} \left( p_1^{(k)} r_1^{(k)} - p_2^{(k)} r_2^{(k)} \right)^2 \Big/ \# L_1(p) } \]
where $r_1^{(k)}$ and $r_2^{(k)}$ denote the subscripts of the $k$-th linguistic terms of $L_1(p)$ and $L_2(p)$, respectively, and $p_1^{(k)}$ and $p_2^{(k)}$ denote the corresponding probability values.
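Definition 3 does not specify which linguistic terms are appended with probability 0; the sketch below mirrors the terms of the longer PLTS, which is one possible convention and our own assumption.

```python
import math

def pad(l1, l2):
    """Extend the shorter PLTS with zero-probability terms (Definition 3).
    The choice of which terms to append is an assumption of this sketch."""
    l1, l2 = list(l1), list(l2)
    while len(l1) < len(l2):
        l1.append((l2[len(l1)][0], 0.0))
    while len(l2) < len(l1):
        l2.append((l1[len(l2)][0], 0.0))
    return l1, l2

def distance(l1, l2):
    """Distance between two PLTSs, after padding them to equal length."""
    l1, l2 = pad(l1, l2)
    n = len(l1)
    return math.sqrt(sum((p1 * r1 - p2 * r2) ** 2
                         for (r1, p1), (r2, p2) in zip(l1, l2)) / n)

print(distance([(4, 1.0)], [(2, 1.0)]))  # → 2.0
```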

3.2. Problem Description

In this study, the selection of new energy vehicles is considered a multi-attribute decision problem, and a new method is proposed to select an optimal new energy vehicle from several candidates. Suppose that there are $n$ new energy vehicles, also called alternatives, and $m$ attributes of new energy vehicles. Let $A = \{a_1, a_2, \ldots, a_n\}$ be the alternative set and $C = \{c_1, c_2, \ldots, c_m\}$ the attribute set. The attribute weights are denoted by $W = \{w_1, w_2, \ldots, w_m\}$, where $0 \le w_h \le 1$ and $\sum_{h=1}^{m} w_h = 1$. In addition, in the first stage, this study uses the LDA model to classify the reviews under each alternative into topics. Thus, let $T = \{t_1, t_2, \ldots, t_{\#topic}\}$ be the topic set, where $\#topic$ denotes the total number of topics. Notably, a right superscript distinguishes whether a quantity is based on star ratings or text reviews: the superscript "$sr$" denotes star ratings, while "$tr$" refers to text reviews.
Let a seven-level LTS $S = \{ s_\alpha \mid \alpha = 0, 1, \ldots, 6 \}$ denote the evaluation values obtained from text reviews. Probabilistic linguistic evaluation values for vehicle $a_j$ with respect to attribute $c_i$ and topic $t_v$ are provided by consumers, where $i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n$; $v = 1, 2, \ldots, \#topic$. The evaluation values $(L_{ijv}^{k})^{tr}$ with probabilities $(p_{ijv}^{k})^{tr}$ form the PLTS $L_{ijv}^{tr}(p) = \{ (L_{ijv}^{k})^{tr}((p_{ijv}^{k})^{tr}) \mid k = 1, 2, \ldots, \# L_{ijv}^{tr} \}$, where $(p_{ijv}^{k})^{tr} > 0$ for $k = 1, 2, \ldots, \# L_{ijv}^{tr}$ and $\sum_{k=1}^{\# L_{ijv}^{tr}} (p_{ijv}^{k})^{tr} = 1$. Then, the sub-topic text review decision matrices $R_v^{tr} = [L_{ijv}^{tr}(p)]_{n \times m}$ and the text review decision matrix $R^{tr} = [L_{ij}^{tr}(p)]_{n \times m}$ are obtained.
Consumers on DCar can rate the attributes of new energy vehicles on a scale of 1 to 5. Therefore, let a five-level LTS $S = \{ s_\alpha \mid \alpha = 0, 1, \ldots, 4 \}$ denote attribute ratings. The evaluation values $(L_{ijv}^{k})^{sr}$ with probabilities $(p_{ijv}^{k})^{sr}$ form the PLTS $L_{ijv}^{sr}(p) = \{ (L_{ijv}^{k})^{sr}((p_{ijv}^{k})^{sr}) \mid k = 1, 2, \ldots, \# L_{ijv}^{sr} \}$, where $(p_{ijv}^{k})^{sr} > 0$ for $k = 1, 2, \ldots, \# L_{ijv}^{sr}$ and $\sum_{k=1}^{\# L_{ijv}^{sr}} (p_{ijv}^{k})^{sr} = 1$. Then, the sub-topic star rating decision matrices $R_v^{sr} = [L_{ijv}^{sr}(p)]_{n \times m}$ and the star rating decision matrix $R^{sr} = [L_{ij}^{sr}(p)]_{n \times m}$ are obtained.

3.3. Mathematical Model

The proposed methodology consists of four stages, as shown in Figure 1. The first stage is the formation of the star rating decision matrix $R^{sr} = [L_{ij}^{sr}(p)]_{n \times m}$ and the text review decision matrix $R^{tr} = [L_{ij}^{tr}(p)]_{n \times m}$. The second stage is the formation of the comprehensive decision matrix $R = [L_{ij}(p)]_{n \times m}$. The third stage is weight calculation. The fourth stage is alternative ranking and selection.

3.3.1. Obtain the Star Rating Decision Matrix and the Text Review Decision Matrix

Step 1. Data acquisition and pre-processing.
User-generated data on automotive forums generally include text comments and star ratings of attributes. We use the Octopus collector to obtain online comments and star ratings from relevant automotive forums and then carry out data pre-processing. First of all, we carry out data cleaning, removing incomplete data and duplicate data. Secondly, we use the Jieba library to segment text reviews and remove useless words, stop words, punctuation marks, etc. Thirdly, we perform the POS tagging. Fourthly, we extract key new energy vehicle attributes based on TF-IDF values.
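Step 1 relies on the Octopus collector and the Jieba library. As a self-contained illustration of the TF-IDF attribute-extraction idea only, the following dependency-free sketch scores tokens of already-segmented reviews; the corpus and tokens are invented for illustration.

```python
import math
from collections import Counter

def tfidf(docs):
    """Toy TF-IDF over pre-tokenized documents: words frequent in one
    review but rare across the corpus score highest, which is the idea
    behind extracting key vehicle attributes in Step 1."""
    df = Counter()                      # document frequency of each token
    for doc in docs:
        df.update(set(doc))
    n = len(docs)
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({w: (tf[w] / len(doc)) * math.log(n / df[w])
                       for w in tf})
    return scores

docs = [["appearance", "good", "power"],
        ["power", "consumption", "low"],
        ["appearance", "stylish"]]
s = tfidf(docs)
# "good" appears in only one document, so it outscores the common "power"
```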
Step 2. Aspect-based sentiment analysis.
We compare the attributes extracted based on reviews with those in star ratings and determine the attribute set mainly based on the latter. Then, we use the pysenti library to conduct the dictionary-based sentiment analysis. Firstly, we build several pairs of attribute–sentiment words as a seed table. Secondly, we search the attributes of the seed table in the dataset and obtain words or phrases associated with each attribute. Thirdly, we identify the sentiment polarity (positive, neutral, negative) of each word or phrase according to the sentiment dictionary. Fourthly, we assign weights to the sentiment polarity of multiple sentiment words under each attribute in combination with the sentence structure, and then the weighted sum method is used to obtain the sentiment polarity score for each attribute.
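The weighted-sum idea behind the dictionary-based aspect sentiment scoring can be sketched as follows. The polarity dictionary, attribute–word pairs, and unit weights here are illustrative stand-ins, not the paper's pysenti configuration.

```python
# Illustrative polarity dictionary: +1 positive, -1 negative, 0 neutral.
POLARITY = {"good": 1, "spacious": 1, "weak": -1, "noisy": -1, "okay": 0}

def aspect_sentiment(pairs):
    """pairs: list of (attribute, sentiment_word) extracted from a review.
    Returns a weighted-sum sentiment score per attribute (unit weights
    here; the paper additionally weights by sentence structure)."""
    scores = {}
    for attr, word in pairs:
        scores[attr] = scores.get(attr, 0) + POLARITY.get(word, 0)
    return scores

pairs = [("power", "weak"), ("space", "spacious"), ("space", "good")]
print(aspect_sentiment(pairs))  # → {'power': -1, 'space': 2}
```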
Step 3. Convert sentiment scores into seven-level probabilistic linguistic terms.
Based on the equidistant binning method, the sentiment scores of the attributes are divided into seven segments equidistantly and then converted into the corresponding seven-level probabilistic linguistic terms.
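The equidistant binning of Step 3 can be sketched as below. The sentiment score range $[-1, 1]$ is our assumption for illustration; the paper does not state the range produced by its sentiment analysis.

```python
def to_seven_level(score, lo=-1.0, hi=1.0):
    """Map a sentiment score in [lo, hi] into one of seven equal-width
    bins and return the subscript of the linguistic term s_0 .. s_6."""
    if not lo <= score <= hi:
        raise ValueError("score out of range")
    width = (hi - lo) / 7
    k = int((score - lo) / width)
    return min(k, 6)   # the upper edge falls into the last bin

print(to_seven_level(-1.0))  # → 0 (most negative term s_0)
print(to_seven_level(1.0))   # → 6 (most positive term s_6)
```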
Step 4. Convert attribute ratings into five-level probabilistic linguistic terms.
The attribute ratings, obtained in Step 1 and ranging from 1 to 5, can be directly converted into five-level probabilistic linguistic terms. Specifically, the five ratings, namely excellent, good, average, bad, and terrible, are replaced by the linguistic terms $s_4$, $s_3$, $s_2$, $s_1$, and $s_0$, respectively.
Step 5. Topic classification.
Text reviews are often one-sided and the formation process will be influenced by consumer demands. For example, consumers who pursue appearance will also focus on the description of appearance when commenting. In order to match the demands of comment writers and comment viewers, it is necessary to firstly identify the diverse demands of consumers. This study used the LDA model to classify consumers. The LDA model is used to classify all the comments after pre-processing into topics and the optimal number of topics is selected on the basis of perplexity. It is worth noting that the number of topics under each alternative is the same, but the percentage of each topic may be different.
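The perplexity-based choice of the number of topics in Step 5 reduces to picking the candidate with the lowest perplexity. In the sketch below, the perplexity values are illustrative stand-ins for evaluating a fitted LDA model (e.g., gensim's `LdaModel`) at each candidate topic number.

```python
def best_num_topics(perplexities):
    """perplexities: dict mapping candidate topic number -> perplexity
    of the LDA model fitted with that many topics. Lower is better."""
    return min(perplexities, key=perplexities.get)

# Illustrative values: perplexity typically falls and then rises again.
perplexities = {2: 310.5, 3: 288.2, 4: 279.9, 5: 284.1}
print(best_num_topics(perplexities))  # → 4
```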
Step 6. Obtain the sub-topic star rating decision matrices and the sub-topic text review decision matrices.
We calculate the proportion of each linguistic term for each alternative under different sub-topics and attributes, and thus form the sub-topic text review decision matrices R v t r = [ L i j v t r ( p ) ] n × m and the sub-topic star rating decision matrices R v s r = [ L i j v s r ( p ) ] n × m .
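Forming a PLTS cell from the individual reviews of one (alternative, attribute, topic) combination amounts to counting proportions, as in this sketch (representation and helper name are ours):

```python
from collections import Counter

def to_plts(term_subscripts):
    """Turn the linguistic terms of all reviews for one cell of a
    sub-topic decision matrix into a PLTS: each observed term with its
    proportion among the reviews."""
    counts = Counter(term_subscripts)
    n = len(term_subscripts)
    return sorted((r, c / n) for r, c in counts.items())

# Five reviews of one vehicle/attribute/topic rated s_3, s_4, s_4, s_2, s_4
print(to_plts([3, 4, 4, 2, 4]))  # → [(2, 0.2), (3, 0.2), (4, 0.6)]
```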
Step 7. Obtain the star rating decision matrix and the text review decision matrix.
The sub-topic decision matrices under each alternative are averaged separately, yielding the star rating decision matrix $R^{sr} = [L_{ij}^{sr}(p)]_{n \times m}$ and the text review decision matrix $R^{tr} = [L_{ij}^{tr}(p)]_{n \times m}$. The aggregation formula is as follows:
\[ p_{ij}^{k} = \frac{\sum_{v=1}^{\#topic} p_{ijv}^{k}}{\#topic} \]
where $\#topic$ denotes the number of topics and $p_{ijv}^{k}$ is the probability value of the $k$-th linguistic term for alternative $a_j$ under attribute $c_i$ and topic $t_v$. Notably, for the sub-topic decision matrices of text reviews, $k \in \{0, 1, \ldots, 6\}$, while for those of star ratings, $k \in \{0, 1, \ldots, 4\}$. Finally, the probabilities are normalized as follows:
\[ \bar{p}_{ij}^{k} = \frac{p_{ij}^{k}}{\sum_{k'} p_{ij}^{k'}} \]
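The averaging-then-normalizing aggregation of Step 7 can be sketched for a single (alternative, attribute) cell as follows; the probability vectors are illustrative.

```python
def aggregate_over_topics(prob_by_topic):
    """prob_by_topic[v][k] = probability of linguistic term k for one
    (alternative, attribute) cell under topic v. Average over topics,
    then renormalize so the probabilities sum to 1."""
    n_topics = len(prob_by_topic)
    n_terms = len(prob_by_topic[0])
    avg = [sum(topic[k] for topic in prob_by_topic) / n_topics
           for k in range(n_terms)]
    total = sum(avg)
    return [p / total for p in avg]

# Two topics, five-level star-rating terms s_0 .. s_4 (illustrative values)
topics = [[0.0, 0.0, 0.2, 0.3, 0.5],
          [0.0, 0.1, 0.1, 0.4, 0.4]]
print(aggregate_over_topics(topics))
```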

3.3.2. Obtain the Comprehensive Decision Matrix

Since 1974, many studies have revealed that the human thinking process is influenced by “two systems”, which are System 1 and System 2 [62,63]. System 1 is biased fast thinking, while System 2 is rational slow thinking. Specifically, System 1 is intuitive, automatic, and associative, operating unconsciously and quickly, without much mental effort [62,63]. In contrast, System 2 requires a lot of mental effort and concentration and is controlled, disciplined, and focused on the outcome of the decision [62,63].
When consumers evaluate products, they are often first required to rate the product attributes and are further asked to make a comment. The assignment of star ratings involves fast thinking, related to intuition, association, experience, and other thinking in System 1. Thus, star ratings are likely to be vague and biased and do not accurately reflect consumers’ evaluations of products or services. Cho et al. argued that herd behavior is probably occurring when consumers give star ratings because they can see the average star rating, which may influence their own rating values [64]. Some experiments have also shown that consumers tend to modify their own rating values after seeing the ratings given by others, with 70% of the final star ratings derived from their own initial estimates and 30% from others’ estimates [64]. Considering the fuzziness and bias of star ratings, many consumers are skeptical of high star ratings. Gavilan et al. found that web users prefer to trust low-value ratings rather than high-value ratings and that consumers’ trust in high ratings depends on the number of reviews [65]. Hong et al. mentioned that consumers do not believe in high star ratings unless the review content is also positive [66]. Cho et al. argued that the positive sentiment of text reviews would compensate for consumers’ tendency to discount high star ratings, but the negative sentiment of text reviews would aggravate consumers’ suspicion of high ratings [64]. Based on the above analysis, we cannot rank products only based on star ratings because star rating values cannot accurately reflect consumers’ evaluations of products.
The formation of text comments involves rational, slow thinking, related to the controlled, thoughtful, logical thinking of System 2. When composing text reviews, consumers spend time thinking, forming words, and then logically writing them down. However, writing reviews is burdensome for consumers: it provides no immediate benefit but takes up time. Therefore, consumers are reluctant to spend much effort on reviewing. Text reviews are generally not very long, and their content is one-sided, but they can serve as a supplement to star ratings. It has also been shown that text reviews carry additional information that significantly influences product demand [64].
Based on the above analysis, this study relies mainly on star ratings, complemented by text reviews, to obtain consumers’ evaluation values of products. Specifically, this study argues that star ratings are often overestimated and adjusts them with the sentiment scores of text comments. Four states of linguistic terms are considered, as shown in Figure 2. Firstly, this study excludes the case of low-star but positive text reviews, which is rare. Secondly, linguistic terms are acceptable in the cases of high-star and positive text reviews or low-star and negative text reviews, which do not require adjustment. Finally, linguistic terms for high-star but negative text reviews need to be adjusted: when the text content accompanying a high star rating is negative, the high star rating needs to be adjusted downward. The adjustment rules are shown in Table 2.
In Table 2, the first column on the left is the probabilistic linguistic terms of star ratings, the first row above is the probabilistic linguistic terms for the sentiment scores of the text reviews, and the rest are adjusted probabilistic linguistic terms. The general rule is to adjust the star ratings through the sentiments of corresponding text reviews. This study only considers the situation of high-star and negative text reviews, and ignores the situation of low-star and positive text comments. Thus, the adjustment direction is downward, and the adjustment range is [0,1], with the same unit adjustment for each row. When the difference between the star rating and the sentiment of the text comment is larger, the adjustment of the star rating is larger; otherwise, the adjustment is smaller or even zero. For example, when the star rating is s 4 , the probabilistic linguistic terms for the sentiment scores of text comments might be { s 0 , s 1 , s 2 , s 3 , s 4 , s 5 , s 6 } . Meanwhile, when the review is s 6 , the high star rating s 4 is acceptable, and adjustment is not required. However, when the review is s 5 , s 4 , s 3 , s 2 , s 1 , or s 0 , the star rating s 4 may be overrated and needs to be modified with increasing amounts of adjustment. Similarly, when the probabilistic linguistic term of the review is greater than or equal to s 4 , the star rating s 3 is credible. However, if the review is s 3 , the star rating s 3 is to be adjusted to s 2.75 . Meanwhile, if the review is s 2 , s 1 , s 0 , the star rating s 3 is to be adjusted to s 2.5 , s 2.25 , s 2 , respectively. Notably, this study does not consider situations where star ratings are underrated. Thus, when the star rating is s 0 , the rating value will not be adjusted upward, even if the comment is positive.
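The adjustment rule described above can be sketched in code. The threshold map below (a high star rating s_k is credible only when the review sentiment reaches level 2k − 2) is a hypothetical reconstruction consistent with the two worked examples in the text (s_4 is credible under a review of s_6, and s_3 under a review of s_4); Table 2 remains authoritative.

```python
def adjust_star(star, review):
    """Down-adjust a possibly overvalued star rating.

    star: star-rating subscript on the 5-level scale s_0..s_4.
    review: sentiment subscript on the 7-level scale s_0..s_6.
    The credibility threshold t = 2*star - 2 is an assumption
    consistent with the examples in the text, not the full Table 2.
    """
    if star <= 1:
        return float(star)       # low ratings are never adjusted
    threshold = 2 * star - 2     # review level at which the rating is credible
    gap = max(0, threshold - review)
    unit = 1.0 / threshold       # equal unit per row; total range [0, 1]
    return star - unit * gap
```

Under this reconstruction, a star rating of s_3 with a review sentiment of s_3 is adjusted to s_2.75, and with a review of s_0 to s_2, matching the values in the text.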
Steps to obtain the comprehensive decision matrix are as follows.
Input. Step 3, step 4, and step 5 in Section 3.3.1.
Step 1. Obtain consumers’ evaluation values.
Based on the adjustment rule designed in this study, the probabilistic linguistic terms of the star ratings are adjusted and the final consumer evaluation values are obtained.
Step 2. Obtain the sub-topic comprehensive decision matrices.
Calculate the proportion of each linguistic term for each alternative under different sub-topics and attributes, and thus form the sub-topic comprehensive decision matrices.
Step 3. Obtain the comprehensive decision matrix.
The average of the sub-topics’ decision matrices under each alternative is separately calculated using Equation (5), and then normalized by using Equation (6) and integrated into a comprehensive decision matrix.
Output. A comprehensive probabilistic linguistic decision matrix.
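As a minimal sketch of Step 2, the proportion of each linguistic term can be tallied from the (adjusted) individual evaluations; the rounding to three decimals is an illustrative choice, not part of the method.

```python
from collections import Counter

def to_plts(term_subscripts):
    """Form a probabilistic linguistic term set from a list of
    linguistic-term subscripts: {subscript: proportion}."""
    n = len(term_subscripts)
    counts = Counter(term_subscripts)
    return {term: round(count / n, 3) for term, count in sorted(counts.items())}
```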

3.3.3. Weight Calculation of Attributes

This study first defined the concept of attribute richness and then calculated the weights based on richness and dissimilarity. Since this study contained dual data sources, the richness (RI) and dissimilarity (DI) were calculated based on the star rating decision matrix and the text review decision matrix, respectively. Next, the symbol ‘ri’ will be used to refer to richness, ‘di’ to refer to dissimilarity, and ‘sr’ to denote star ratings and ‘tr’ to denote text reviews.
(1)
Calculating the richness of attributes
When the distribution of the evaluation values of an attribute is concentrated, many customers share the same evaluation of this attribute, so it carries less information and its weight should be decreased. On the contrary, when the distribution of the evaluation values of an attribute is balanced, consumers’ evaluations of the attribute are rich and diverse, providing more information. In this case, the attribute has a higher degree of richness, and its weight should be increased.
The following two probabilistic linguistic term sets are used to illustrate attribute richness.
  • Let a PLTS L 1 ( p ) = { s 2 ( 0.07 ) , s 3 ( 0.60 ) , s 4 ( 0.23 ) , s 5 ( 0.04 ) , s 6 ( 0.06 ) } express the consumers’ evaluation value of attribute 1. It shows that 60% of consumers believe that attribute 1 is average, 23% claim that it is good, and fewer consumers believe that attribute 1 is bad, very good, or excellent.
  • Let a PLTS L 2 ( p ) = { s 2 ( 0.27 ) , s 3 ( 0.20 ) , s 4 ( 0.23 ) , s 5 ( 0.14 ) , s 6 ( 0.16 ) } express the consumers’ evaluation value of attribute 2. It shows that 27% of consumers believe that attribute 2 is bad, and 23% claim that it is good. Moreover, 20%, 14%, and 16% of consumers separately believe that attribute 2 is average, very good, and excellent.
Among the two attributes above, the consumers’ evaluation of attribute 1 is concentrated on average and good, while the evaluation of attribute 2 is diverse: the probability values of the different linguistic terms are closer to one another, so the evaluation value of attribute 2 contains richer information. Attribute 2 should be assigned a larger weight because different consumers have rich and diverse feelings about it, which exerts a greater influence on product selection.
Definition 4.
Let $L(p) = \{ L^{(k)}(p^{(k)}) \mid k = 1, 2, \ldots, \#L(p) \}$ be a probabilistic linguistic term set, with $p^{(k)} > 0$ and $\sum_{k=1}^{\#L(p)} p^{(k)} = 1$. Let $L_e(p) = \{ L^{(k)}(p_e^{(k)}) \mid k = 1, 2, \ldots, \#L(p) \}$ be the ideal equilibrium PLTS of $L(p)$, where $p_e^{(1)} = p_e^{(2)} = \cdots = p_e^{(\#L(p))} = p_e$, $p_e^{(k)} > 0$, and $\sum_{k=1}^{\#L(p)} p_e^{(k)} = 1$. The distance measure between $L(p)$ and $L_e(p)$ can be defined as follows:

d(L(p), L_e(p)) = \sqrt{\frac{1}{\#L(p)} \sum_{k=1}^{\#L(p)} \left( r^{(k)} \left( p^{(k)} - p_e \right) \right)^2}

where $r^{(k)}$ denotes the subscript of linguistic term $L^{(k)}$, $p^{(k)}$ denotes the probability value of linguistic term $L^{(k)}$, and $\#L(p)$ is the number of linguistic terms in $L(p)$ or $L_e(p)$. Meanwhile, $p_e$ is a fixed value, the common probability of all linguistic terms in $L_e(p)$, with $p_e = 1/\#L(p)$.
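A small sketch of Definition 4, assuming the root form of the distance; applied to the two example PLTSs from the previous subsection, it confirms that the evaluation of attribute 2 lies closer to its ideal equilibrium PLTS and is therefore richer.

```python
import math

def dev_from_equilibrium(plts):
    """Distance between a PLTS and its ideal equilibrium PLTS.
    plts: list of (subscript r(k), probability p(k)) pairs."""
    k = len(plts)      # #L(p)
    p_e = 1.0 / k      # equal probability for every term
    return math.sqrt(sum((r * (p - p_e)) ** 2 for r, p in plts) / k)
```

For $L_1(p)$ from the example above the distance is about 0.757, and for $L_2(p)$ about 0.191.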
This study calculates the attribute richness by calculating the distance between the evaluation value and its ideal equilibrium evaluation value. The steps to calculate the richness are as follows.
Step 1. Calculate the distance $dri_{ij}^{sr}$ between the evaluation value of alternative $a_j$ under attribute $c_i$ and its ideal equilibrium evaluation value according to Equation (7) (based on the star rating decision matrix).
Step 2. Calculate the average distance $dri_{i}^{sr}$ between attribute $c_i$ and the ideal equilibrium evaluation value (based on the star rating decision matrix):

dri_{i}^{sr} = \frac{1}{n} \sum_{j=1}^{n} dri_{ij}^{sr}

Step 3. Calculate the richness $Ri_{i}^{sr}$ (based on the star rating decision matrix):

Ri_{i}^{sr} = 1 - \frac{dri_{i}^{sr}}{\sum_{i=1}^{m} dri_{i}^{sr}}

The smaller the average distance between the evaluation value and the ideal equilibrium solution, the higher the richness and the greater the weight.
Step 4. Calculate the distance $dri_{ij}^{tr}$ between the evaluation value of alternative $a_j$ under attribute $c_i$ and its ideal equilibrium evaluation value according to Equation (7) (based on the text review decision matrix).
Step 5. Calculate the average distance $dri_{i}^{tr}$ between attribute $c_i$ and the ideal equilibrium evaluation value (based on the text review decision matrix):

dri_{i}^{tr} = \frac{1}{n} \sum_{j=1}^{n} dri_{ij}^{tr}

Step 6. Calculate the richness $Ri_{i}^{tr}$ (based on the text review decision matrix):

Ri_{i}^{tr} = 1 - \frac{dri_{i}^{tr}}{\sum_{i=1}^{m} dri_{i}^{tr}}

Step 7. Calculate the comprehensive richness $Ri_{i}$ (based on the star rating decision matrix and the text review decision matrix):

Ri_{i} = \alpha_1 \times Ri_{i}^{sr} + \alpha_2 \times Ri_{i}^{tr}

where $\alpha_1$ and $\alpha_2$ are a pair of parameters representing the relative weight of star ratings and text reviews, with $\alpha_1 + \alpha_2 = 1$.
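Steps 2 and 3 (and, symmetrically, Steps 5 and 6) can be sketched as follows; `dist` is a hypothetical m × n array of the per-alternative equilibrium distances from Equation (7).

```python
def richness_from_distances(dist):
    """dist[i][j]: equilibrium distance of attribute c_i under
    alternative a_j. Averages over alternatives, then converts the
    average distances into richness values (the smaller the distance,
    the higher the richness)."""
    avg = [sum(row) / len(row) for row in dist]
    total = sum(avg)
    return [1 - d / total for d in avg]
```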
(2)
Calculating the dissimilarity of attributes
When the distance between the evaluation values under two attributes is larger, the similarity between these two attributes is lower and the dissimilarity is higher. Based on clustering theory, when the average distance between an attribute and the other attributes is larger, the attribute has a greater effect on the results and should be assigned a larger weight; otherwise, it should be assigned a smaller weight. The steps for calculating the dissimilarity are as follows.
Step 1. Calculate the distance $ddi_{vij}^{sr}$ between the evaluation values of attribute $c_v$ and attribute $c_i$ under alternative $a_j$ according to Equation (4) (based on the star rating decision matrix).
Step 2. Calculate the average distance $ddi_{vi}^{sr}$ between attribute $c_v$ and attribute $c_i$ (based on the star rating decision matrix):

ddi_{vi}^{sr} = \frac{1}{n} \sum_{j=1}^{n} ddi_{vij}^{sr}

Step 3. Calculate the dissimilarity $Di_{i}^{sr}$ of attribute $c_i$ (based on the star rating decision matrix):

Di_{i}^{sr} = \frac{1}{m} \sum_{v=1}^{m} ddi_{vi}^{sr}

Step 4. Obtain the normalized dissimilarity $\overline{Di}_{i}^{sr}$ (based on the star rating decision matrix):

\overline{Di}_{i}^{sr} = \frac{Di_{i}^{sr}}{\max \{ Di_{i}^{sr} \mid i \in \{ 1, 2, \ldots, m \} \}}

Step 5. Calculate the distance $ddi_{vij}^{tr}$ between the evaluation values of attribute $c_v$ and attribute $c_i$ under alternative $a_j$ according to Equation (4) (based on the text review decision matrix).
Step 6. Calculate the average distance $ddi_{vi}^{tr}$ between attribute $c_v$ and attribute $c_i$ (based on the text review decision matrix):

ddi_{vi}^{tr} = \frac{1}{n} \sum_{j=1}^{n} ddi_{vij}^{tr}

Step 7. Calculate the dissimilarity $Di_{i}^{tr}$ of attribute $c_i$ (based on the text review decision matrix):

Di_{i}^{tr} = \frac{1}{m} \sum_{v=1}^{m} ddi_{vi}^{tr}

Step 8. Obtain the normalized dissimilarity $\overline{Di}_{i}^{tr}$ (based on the text review decision matrix):

\overline{Di}_{i}^{tr} = \frac{Di_{i}^{tr}}{\max \{ Di_{i}^{tr} \mid i \in \{ 1, 2, \ldots, m \} \}}

Step 9. Calculate the comprehensive dissimilarity $Di_{i}$ (based on the star rating decision matrix and the text review decision matrix):

Di_{i} = \alpha_1 \times \overline{Di}_{i}^{sr} + \alpha_2 \times \overline{Di}_{i}^{tr}

where $\alpha_1$ and $\alpha_2$ are the same pair of parameters as in Equation (12), with $\alpha_1 + \alpha_2 = 1$.
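Steps 3 and 4 (and, symmetrically, Steps 7 and 8) can be sketched as follows; `dd` is a hypothetical m × m array of the averaged pairwise attribute distances from Step 2.

```python
def dissimilarity_from_pairwise(dd):
    """dd[v][i]: average distance between attributes c_v and c_i over
    all alternatives. Averages each column over the attributes, then
    normalizes by the maximum so the most dissimilar attribute scores 1."""
    m = len(dd)
    d = [sum(dd[v][i] for v in range(m)) / m for i in range(m)]
    d_max = max(d)
    return [x / d_max for x in d]
```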
Having obtained the richness and dissimilarity of each attribute, the attribute weights based on richness and dissimilarity can be calculated as follows:

w_i = \beta_1 \times Ri_{i} + \beta_2 \times Di_{i}

where $\beta_1$ and $\beta_2$ are a pair of parameters representing the relative weight of richness and dissimilarity, with $\beta_1 + \beta_2 = 1$. The larger the richness and the dissimilarity of an attribute, the greater its weight.
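Putting the comprehensive richness, comprehensive dissimilarity, and weight formulas together, the whole weighting step can be sketched as below; the final normalization so the weights sum to one is an added assumption, as the paper does not state it explicitly.

```python
def attribute_weights(ri_sr, ri_tr, di_sr, di_tr,
                      alpha1=0.4, alpha2=0.6, beta1=0.5, beta2=0.5):
    """Blend richness and dissimilarity from the star rating (sr) and
    text review (tr) decision matrices into attribute weights.
    Default parameter values follow the case study in Section 4.3."""
    weights = [beta1 * (alpha1 * rs + alpha2 * rt) +
               beta2 * (alpha1 * ds + alpha2 * dt)
               for rs, rt, ds, dt in zip(ri_sr, ri_tr, di_sr, di_tr)]
    total = sum(weights)
    return [w / total for w in weights]  # normalization: an added assumption
```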

3.3.4. Alternative Ranking and Selection

Since this study integrates the star rating decision matrix and the text review decision matrix into a comprehensive decision matrix with 15 linguistic terms, we choose the TOPSIS method [67], which is easier to apply, to rank the alternatives and select the best one, with the following steps.
Step 1. Calculate the score value $s_{ij}$ of alternative $a_j$ under attribute $c_i$:

s_{ij} = \sum_{k=1}^{\#L_{ij}} r_{ij}^{(k)} \times p_{ij}^{(k)}

Step 2. Obtain the normalized score value $y_{ij}$:

y_{ij} = \frac{s_{ij}}{\sqrt{\sum_{j=1}^{n} s_{ij}^2}}

Step 3. Obtain the weighted score value $z_{ij}$:

z_{ij} = y_{ij} \times w_i

Step 4. Calculate the maximum and minimum values.
Let $PIS = \{ z_1^+, z_2^+, \ldots, z_m^+ \}$ be the set of maximum values, where $z_i^+ = \max_j z_{ij}$, $j \in \{ 1, 2, \ldots, n \}$, and let $NIS = \{ z_1^-, z_2^-, \ldots, z_m^- \}$ be the set of minimum values, where $z_i^- = \min_j z_{ij}$, $j \in \{ 1, 2, \ldots, n \}$.
Step 5. Calculate the distances $D_j^+$ and $D_j^-$ from the positive ideal solution and from the negative ideal solution, respectively:

D_j^+ = \sqrt{\sum_{i=1}^{m} (z_{ij} - z_i^+)^2}

D_j^- = \sqrt{\sum_{i=1}^{m} (z_{ij} - z_i^-)^2}

Step 6. Calculate the nearness degree $RC_j$ of alternative $a_j$:

RC_j = \frac{D_j^-}{D_j^+ + D_j^-}

Rank the alternatives in descending order of nearness degree; the one with the greatest nearness degree is the optimal alternative.
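The six steps above can be sketched as a single routine; all attributes are treated as benefit-type, as in the paper.

```python
import math

def topsis(scores, weights):
    """TOPSIS over an m-attribute x n-alternative score matrix.
    Returns the nearness degree RC_j for each alternative."""
    m, n = len(scores), len(scores[0])
    norms = [math.sqrt(sum(v * v for v in row)) for row in scores]
    z = [[weights[i] * scores[i][j] / norms[i] for j in range(n)]
         for i in range(m)]                  # normalized, weighted scores
    z_pos = [max(row) for row in z]          # positive ideal solution
    z_neg = [min(row) for row in z]          # negative ideal solution
    rc = []
    for j in range(n):
        d_pos = math.sqrt(sum((z[i][j] - z_pos[i]) ** 2 for i in range(m)))
        d_neg = math.sqrt(sum((z[i][j] - z_neg[i]) ** 2 for i in range(m)))
        rc.append(d_neg / (d_pos + d_neg))
    return rc
```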

4. Case Study: New Energy Vehicle Selection in DCar.com

DCar.com is a car information and service platform that categorizes cars into new energy cars, SUVs, sports cars, and other types, providing information on cars including price, parameter configuration, size, sales, word of mouth, and more. In the word-of-mouth section, numerous car buyers rate the attributes of cars that they have purchased on a scale of 1 to 5, and then upload pictures of the car and post reviews of their experiences with it. Users of the DCar website can browse the attribute ratings and reviews to obtain information about the cars and make better purchasing decisions. Therefore, ranking cars based on attribute ratings and text reviews on DCar.com is a problem worth studying.

4.1. Data Source and Data Processing

Based on the sales rankings of new energy vehicles on DCar.com for the past year, and taking into account the diversity of brands and sizes of vehicles, this study selected the top five compact new energy vehicles (CNEV) in terms of sales, namely car A, car B, car C, car D, and car E, denoted hereafter by a 1 , a 2 , a 3 , a 4 , and a 5 . Moreover, the top four medium and large new energy vehicles (MLNEV) in terms of sales were also selected, namely car F, car G, car H, and car I, and these vehicles are denoted hereafter by a 6 , a 7 , a 8 , and a 9 . This study obtained star ratings and text reviews for the above nine cars on DCar.com and the number of ratings/reviews for each vehicle is shown in Table 3.
When purchasing cars, consumers may have different demands, which are further reflected in their attribute ratings and text reviews. In order to avoid averaging out consumer opinions, this study first classified consumers and then constructed decision matrices for each consumer type under each vehicle. Specifically, after pre-processing the text reviews, this study used the LDA algorithm to classify consumers of compact and medium–large vehicles, respectively. The topic numbers were first determined based on the perplexity metric, and the consumer types were then manually named according to each topic’s high-frequency words, as shown in Table 4. Finally, there are three types of consumers for CNEVs, namely family-oriented, appearance-oriented, and professional-oriented consumers, expressed as t 1 , t 2 , and t 3 . Moreover, there are four types of consumers for MLNEVs, namely experiential-oriented, appearance-oriented, professional-oriented, and performance-oriented consumers, expressed as t 4 , t 1 , t 2 , and t 5 . Table 5 and Table 6 show the number of ratings/reviews corresponding to the different consumer types for each vehicle.
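The consumer-classification step can be sketched with scikit-learn (an assumption; the paper does not name its LDA implementation): fit LDA for several candidate topic numbers, keep the model with the lowest perplexity, and assign each reviewer to their dominant topic.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def classify_consumers(reviews, candidate_topic_numbers=(2, 3, 4, 5)):
    """Return the dominant LDA topic (consumer type) for each review."""
    X = CountVectorizer(max_features=2000).fit_transform(reviews)
    # choose the topic number with the lowest perplexity
    best = min(
        (LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
         for k in candidate_topic_numbers),
        key=lambda lda: lda.perplexity(X),
    )
    return best.transform(X).argmax(axis=1)  # dominant topic per review
```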
In terms of star rating processing, this study obtained attribute ratings for the above nine cars on DCar.com, where the attributes include space, power, handling, power consumption, comfort, appearance, interior, and cost performance, denoted hereafter by c 1 , c 2 , c 3 , c 4 , c 5 , c 6 , c 7 , and c 8 . Attributes are rated on a scale of 1 to 5, indicating terrible, bad, average, good, and excellent; these levels are denoted hereafter by the linguistic terms s 0 , s 1 , s 2 , s 3 , and s 4 . In terms of text review processing, this study divided the sentiment scores of text reviews into seven levels, namely terrible, very bad, bad, average, good, very good, and excellent; these levels are denoted hereafter by the linguistic terms s 0 , s 1 , s 2 , s 3 , s 4 , s 5 , and s 6 . Finally, the overall star rating decision matrix and the text review decision matrix were obtained according to steps 1 to 7 in Section 3.3.1, as shown in Table A2, Table A3, Table A4, Table A5, Table A6, Table A7, Table A8 and Table A9.
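The seven-level sentiment scale can be illustrated with a simple equal-width binning of a sentiment score in [0, 1]; the paper does not specify its sentiment scorer or binning, so this mapping is purely hypothetical.

```python
def sentiment_to_term(score):
    """Map a sentiment score in [0, 1] to a subscript in s_0 .. s_6
    by equal-width binning (hypothetical mapping)."""
    return min(6, int(score * 7))
```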

4.2. Obtaining the Comprehensive Decision Matrix

This study argues that star ratings are often overestimated and thus designs adjustment rules. The adjustment direction can only be downwards. When the text content of a high star rating is negative, the high star rating needs to be adjusted downwards. According to steps 1 to 3 in Section 3.3.2, this study obtained the comprehensive probabilistic linguistic decision matrix shown in Table 7, Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14 and Table 15.

4.3. Calculating the Weights of Attributes

This study divided new energy vehicles into two categories, compact (CNEV) and medium–large (MLNEV), and argued that consumers have different priorities for these two types of vehicle attributes. For example, consumer demand for space should be lower for compact than for medium–large vehicles, because compact vehicles are inherently small in space. Therefore, this study calculated attribute weights for compact new energy vehicles (including a 1 , a 2 , a 3 , a 4 , a 5 ) and medium–large new energy vehicles (including a 6 , a 7 , a 8 , a 9 ) separately.
This study introduced the concept of attribute richness and calculated attribute weights based on richness and existing dissimilarity. Since this study used two data sources, star ratings and text reviews, the richness and dissimilarity were calculated based on the star rating comprehensive decision matrix and the text review comprehensive decision matrix, respectively. The richness and dissimilarity of each attribute were obtained according to the steps in Section 3.3.3 and are shown in Table 16.
In order to calculate the attribute weights, this study needs to determine the parameters ( α 1 , α 2 , β 1 , β 2 ). Existing research shows that consumers exhibit herd behavior when giving star ratings and are easily influenced by others’ ratings and by the average star rating. As a result, star ratings are commonly overestimated and less truthful, while text reviews are more trustworthy. Consequently, this study set the star rating parameter α 1 to 0.4 and the text review parameter α 2 to 0.6. Moreover, the richness parameter β 1 and the dissimilarity parameter β 2 were both set to 0.5. Table 17 shows the attribute weights for compact and medium–large new energy vehicles.
According to the attribute priorities in Table 18, for compact new energy vehicles, consumers place an emphasis on comfort, interior, and appearance, while, for medium and large new energy vehicles, consumers value power consumption, space, and handling. Moreover, for both compact and medium–large new energy vehicles, power and cost performance receive the least attention.

4.4. Results and New Energy Vehicle Recommendations

Based on the TOPSIS method, the nearness degree of each alternative was calculated according to the steps in Section 3.3.4. Ranking the alternatives by nearness degree, from largest to smallest, this study obtained the ranking results shown in Table 19. These are consistent with the sales ranking over the past year, which initially verifies the feasibility of the proposed method. Car A is recommended when consumers want to purchase a compact new energy vehicle, and car F is recommended for a medium or large new energy vehicle.
This study divided consumers who buy compact new energy vehicles into three categories and those who buy medium and large vehicles into four categories. Different types of consumers have different needs, and each new energy vehicle has different superior and inferior attributes, so it is not reasonable to recommend new energy vehicles without considering the heterogeneity of consumer needs. Therefore, based on the previous consumer classification, this study recommends new energy vehicles for different consumer demands. We input the sub-topic comprehensive decision matrices obtained in step 2 of Section 3.3.2 and re-ranked the alternatives based on the steps in Section 3.3.4. The results are shown in Table 20. In terms of CNEVs, for family-oriented consumers, car B and car E are recommended; for appearance-oriented consumers, car A and car B are favored; and for professional-oriented consumers, car A and car D are preferred. In terms of MLNEVs, for experiential-oriented consumers, car H and car F are recommended, while for the other types of consumers, car F and car G are favored.

5. Comparative and Sensitivity Analysis

5.1. Comparative Analysis

This study proposes a new mathematical method to rank new energy vehicles. The results are consistent with the sales ranking of new energy vehicles on DCar.com in the past year, which initially verifies the feasibility of the proposed method. To further test the effectiveness of our method, this section will describe comparative analyses in three aspects: multi-attribute decision making method, weight, and data source.
In terms of multi-attribute decision making methods, this study designs adjustment rules based on the star rating decision matrix and the text review decision matrix. The adjusted comprehensive decision matrix contains at most 15 elements in its probabilistic linguistic term sets, which greatly increases the difficulty of ranking the alternatives. This study ranks the alternatives using the TOPSIS method in the case study, and the best options are car A and car F. TODIM, PROMETHEE, and VIKOR are common multi-attribute decision making methods but are more complex to compute and unsuitable for probabilistic linguistic term sets with many elements. By contrast, the score function is a simpler method for ranking alternatives. We assigned the same attribute weights as in Section 4.3, first calculating each attribute’s score value and then the weighted score value for each alternative. The results are shown in Table 21. The score function and the TOPSIS method clearly produce the same ranking.
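The score-function baseline used in the comparison amounts to an expected subscript per attribute followed by a weighted sum; a minimal sketch:

```python
def weighted_score(plts_row, weights):
    """Weighted score of one alternative.
    plts_row: one PLTS per attribute, as (subscript, probability) pairs."""
    return sum(w * sum(r * p for r, p in plts)
               for w, plts in zip(weights, plts_row))
```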
In terms of weighting, the term frequency–inverse document frequency (TF-IDF) is often used to determine the weight of each attribute [68,69]. This study obtained the TF-IDF values of the attributes in step 1 of Section 3.3.1 and normalized them to obtain the attribute weights shown in Table 22. Using the TOPSIS method to calculate the nearness degree of the alternatives yields the ranking results a 1 ≻ a 3 ≻ a 4 ≻ a 5 ≻ a 2 and a 6 ≻ a 7 ≻ a 9 ≻ a 8 . The optimal compact new energy vehicle (CNEV) is car A and the optimal medium and large vehicle is car F, matching the optimal solutions obtained in this study. However, alternative 2 is ranked last among the CNEVs, and the rankings of alternative 8 and alternative 9 among the MLNEVs are reversed. This illustrates the limitations of calculating weights based only on TF-IDF values.
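The TF-IDF weighting baseline can be sketched in pure Python; the idf smoothing and the single indicator term per attribute are illustrative assumptions, not the comparison’s exact setup.

```python
import math

def tfidf_attribute_weights(reviews, attribute_terms):
    """Weight each attribute by the summed TF-IDF of its indicator
    term across reviews, normalized to sum to one."""
    n = len(reviews)
    scores = []
    for term in attribute_terms:
        df = sum(term in review for review in reviews)   # document frequency
        idf = math.log((1 + n) / (1 + df)) + 1           # smoothed idf
        tf = sum(review.count(term) for review in reviews)
        scores.append(tf * idf)
    total = sum(scores) or 1.0
    return [s / total for s in scores]
```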
In terms of data sources, this study argued that star ratings are often overestimated and text reviews are one-sided; thus, we integrated these two types of data source. However, most papers have used only text reviews for product ranking [53,54,55,70], and a few have used only star ratings [49,52]. As shown in Table 23, TR indicates alternative selection based only on the text review decision matrix and SR indicates alternative selection based only on the star rating decision matrix. Three multi-attribute decision making methods were used, namely the TOPSIS, Dempster–Shafer evidence theory (DSET), and TODIM methods.
Comparing methods 1–5 with the method proposed in this study, it was found that ranking the alternatives only on the basis of text reviews has significant limitations, with large deviations in the ranking results: the recommended compact cars were alternatives 4 and 5, which should have been ranked lower, and the recommended medium and large cars were alternatives 6 and 9. By contrast, ranking the alternatives only on the basis of star ratings is more credible. The best compact car is alternative 1 and the best medium and large car is alternative 6, which is similar to the results obtained in this study; however, the rankings of some alternatives are reversed, such as alternatives 4 and 5, and alternatives 7 and 8. The possible reasons are as follows. A consumer first gives a star rating and then writes a complementary text review, so the content of the text review is related to the star rating level. When the star rating is high, the consumer tends to add unsatisfactory aspects to the text review. When the star rating is low, the consumer may add satisfactory remarks or make only a mild complaint in order to avoid harassment from the merchant. Consequently, text reviews accompanying lower star ratings are likely to be mildly positive, while text reviews accompanying higher star ratings are likely to be mildly negative. In this case, methods 1–3, which rank the alternatives only on the basis of text comments, deviate substantially. In addition, star ratings are easily overestimated and need to be adjusted by the sentiments of the text reviews; otherwise, a small deviation remains in rankings based only on star ratings, as shown by the results of methods 4 and 5.

5.2. Sensitivity Analysis

The parameters α 1 , α 2 , β 1 , and β 2 are assigned different values to illustrate their influence on the ranking results. The ranking results for CNEVs are shown in Figure 3 and Figure 4, and those for MLNEVs are shown in Figure 5 and Figure 6.
According to Figure 3 and Figure 4, the best alternative is always alternative 1. In the first four scenarios, the CNEV ranking results are consistent across parameter values, sorted as a 1 ≻ a 2 ≻ a 3 ≻ a 4 ≻ a 5 . However, when α 1 = 0.1 , α 2 = 0.9 and β 1 = 0.9 , β 2 = 0.1 , the nearness degree of alternative 3 exceeds that of alternative 2, giving a 1 ≻ a 3 ≻ a 2 ≻ a 4 ≻ a 5 . Moreover, when α 1 = 0.4 , α 2 = 0.6 and β 1 = 0.9 , β 2 = 0.1 , the nearness degree of alternative 4 exceeds that of alternative 3, giving a 1 ≻ a 2 ≻ a 4 ≻ a 3 ≻ a 5 . In the latter five scenarios, when α 1 = 0.5 , α 2 = 0.5 and β 1 = 0.5 , β 2 = 0.5 , the ranking result changes to a 1 ≻ a 2 ≻ a 4 ≻ a 3 ≻ a 5 , and as the star rating parameter α 1 and text review parameter α 2 change, the gap between the nearness degrees of alternative 3 and alternative 4 grows larger. Therefore, parameters α 1 and α 2 have a greater effect on the ranking results, whereas parameters β 1 and β 2 have less influence.
According to Figure 5 and Figure 6, the MLNEV ranking results are consistent across parameter values, sorted as a 6 ≻ a 7 ≻ a 8 ≻ a 9 , and alternative 6 is always the best MLNEV. However, as the star rating parameter α 1 and text review parameter α 2 change, the nearness degrees of alternative 7 and alternative 8 become close. When α 1 = 0.9 , α 2 = 0.1 and β 1 = 0.8 , β 2 = 0.2 , the nearness degree of alternative 8 begins to exceed that of alternative 7, and the ranking result becomes a 6 ≻ a 8 ≻ a 7 ≻ a 9 . The richness parameter β 1 and dissimilarity parameter β 2 slightly influence the nearness degrees in the first three scenarios in Figure 5 but do not affect the ranking results; in the latter six scenarios, they hardly affect the nearness degrees at all.

6. Conclusions

New energy vehicles have become popular, driven by the dual crises of the atmospheric environment and energy security. Since an automobile is a high-involvement, high-value product, consumers tend to gather information through online reviews, which can assist purchase decisions. This study treats the above new energy vehicle selection as a multi-attribute decision making problem. Firstly, we obtained text reviews and eight corresponding attribute star ratings for the five best-selling compact new energy vehicles (CNEVs) and the four best-selling medium and large new energy vehicles (MLNEVs) on the DCar website. Secondly, we designed adjustment rules to deal with the conflict of high star ratings but negative text reviews. Thirdly, we defined the concept of attribute richness and calculated attribute weights based on richness and dissimilarity. Finally, we classified consumers and recommended the optimal new energy vehicle for each consumer type. The results reveal that, for CNEVs, consumers value comfort and the interior, while, for MLNEVs, they value power consumption and space. For family-oriented consumers, car B and car H are recommended; for appearance-oriented and professional-oriented consumers, car A and car F are recommended. Finally, we conducted comparative and sensitivity analyses to verify the effectiveness and robustness of the proposed approach.
There are some limitations. This study covered only nine new energy vehicles from a single website; future research should consider the heterogeneity of websites and vehicles. Moreover, this study identified a mismatch between the demands of review writers and review viewers and addressed it through consumer categorization and targeted recommendations. Future research should consider how platforms could incorporate consumer type as an explicit user attribute and filter reviews accordingly.

Author Contributions

Conceptualization, S.Y. and X.Z.; methodology, S.Y., Z.D., Y.C. and X.Z.; software, X.Z.; validation, S.Y., Z.D. and Y.C.; formal analysis, X.Z.; investigation, Z.D. and X.Z.; data curation, S.Y., X.Z. and Z.D.; writing—original draft preparation, X.Z.; writing—review and editing, S.Y. and X.Z.; visualization, X.Z. and Z.D.; supervision, S.Y. and Y.C.; project administration, S.Y. and Y.C.; funding acquisition, S.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 71901151 and No. 71991461).

Data Availability Statement

The data used in this study were obtained from Prospective Economist, the China Internet Network Information Center, and Ai Media Consulting. The authors are willing to release specific data to readers upon request; please contact the corresponding author for details.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. The list of acronyms and their explanations.
Acronym: Explanation
AHP: Analytic Hierarchy Process
CNEV: Compact New Energy Vehicle
COPRAS: Complex Proportional Assessment
DI: Dissimilarity
DSET: Dempster–Shafer Evidence Theory
ELECTRE: Élimination et Choix Traduisant la Réalité, in French (Elimination and Choice Expressing Reality)
HFLTS: Hesitant Fuzzy Linguistic Term Set
LDA: Latent Dirichlet Allocation
LTS: Linguistic Term Set
MLNEV: Medium and Large New Energy Vehicle
MULTIMOORA: Multi-Objective Optimization by Ratio Analysis plus the Full Multiplicative Form
OWA: Ordered Weighted Averaging
PLTS: Probabilistic Linguistic Term Set
PROMETHEE: Preference Ranking Organization Method for Enrichment of Evaluations
RI: Richness
SR: Star Ratings
TF-IDF: Term Frequency–Inverse Document Frequency
TODIM: Tomada de Decisão Interativa e Multicritério, in Portuguese (Interactive and Multiple Criteria Decision Making)
TOPSIS: Technique for Order Preference by Similarity to Ideal Solution
TR: Text Reviews
VIKOR: Višekriterijumska Optimizacija i Kompromisno Rešenje, in Serbian (Multiple Criteria Optimization and Compromise Solution)
WA: Weighted Averaging
Table A2. The star rating decision matrix (of CNEVs, attributes 1–4).
c 1 c 2 c 3 c 4
a 1 { s 1 ( 0.002 ) , s 2 ( 0.011 ) , s 3 ( 0.165 ) , s 4 ( 0.823 ) } { s 1 ( 0.001 ) , s 2 ( 0.021 ) , s 3 ( 0.129 ) , s 4 ( 0.850 ) } { s 1 ( 0.001 ) , s 2 ( 0.015 ) , s 3 ( 0.107 ) , s 4 ( 0.877 ) } { s 0 ( 0.001 ) , s 1 ( 0.003 ) , s 2 ( 0.020 ) , s 3 ( 0.137 ) , s 4 ( 0.840 ) }
a 2 { s 1 ( 0.009 ) , s 2 ( 0.035 ) , s 3 ( 0.333 ) , s 4 ( 0.623 ) } { s 1 ( 0.002 ) , s 2 ( 0.022 ) , s 3 ( 0.127 ) , s 4 ( 0.849 ) } { s 0 ( 0.001 ) , s 1 ( 0.007 ) , s 2 ( 0.018 ) , s 3 ( 0.219 ) , s 4 ( 0.756 ) } { s 0 ( 0.001 ) , s 1 ( 0.003 ) , s 2 ( 0.013 ) , s 3 ( 0.136 ) , s 4 ( 0.847 ) }
a 3 { s 1 ( 0.005 ) , s 2 ( 0.024 ) , s 3 ( 0.280 ) , s 4 ( 0.691 ) } { s 1 ( 0.003 ) , s 2 ( 0.018 ) , s 3 ( 0.190 ) , s 4 ( 0.789 ) } { s 0 ( 0.002 ) , s 1 ( 0.005 ) , s 2 ( 0.034 ) , s 3 ( 0.183 ) , s 4 ( 0.775 ) } { s 1 ( 0.005 ) , s 2 ( 0.023 ) , s 3 ( 0.144 ) , s 4 ( 0.828 ) }
a 4 { s 1 ( 0.001 ) , s 2 ( 0.009 ) , s 3 ( 0.215 ) , s 4 ( 0.775 ) } { s 2 ( 0.013 ) , s 3 ( 0.242 ) , s 4 ( 0.746 ) } { s 1 ( 0.005 ) , s 2 ( 0.081 ) , s 3 ( 0.304 ) , s 4 ( 0.610 ) } { s 0 ( 0.014 ) , s 1 ( 0.004 ) , s 2 ( 0.128 ) , s 3 ( 0.133 ) , s 4 ( 0.721 ) }
a 5 { s 0 ( 0.002 ) , s 1 ( 0.005 ) , s 2 ( 0.031 ) , s 3 ( 0.210 ) , s 4 ( 0.752 ) } { s 1 ( 0.002 ) , s 2 ( 0.030 ) , s 3 ( 0.205 ) , s 4 ( 0.763 ) } { s 0 ( 0.004 ) , s 1 ( 0.014 ) , s 2 ( 0.050 ) , s 3 ( 0.288 ) , s 4 ( 0.644 ) } { s 1 ( 0.008 ) , s 2 ( 0.022 ) , s 3 ( 0.236 ) , s 4 ( 0.734 ) }
Table A3. The star rating decision matrix (of CNEVs, attributes 5–8).
c 5 c 6 c 7 c 8
a 1 { s 0 ( 0.003 ) , s 1 ( 0.010 ) , s 2 ( 0.049 ) , s 3 ( 0.420 ) , s 4 ( 0.518 ) } { s 2 ( 0.002 ) , s 3 ( 0.070 ) , s 4 ( 0.929 ) } { s 0 ( 0.002 ) , s 1 ( 0.003 ) , s 2 ( 0.025 ) , s 3 ( 0.246 ) , s 4 ( 0.725 ) } { s 1 ( 0.001 ) , s 2 ( 0.003 ) , s 3 ( 0.035 ) , s 4 ( 0.962 ) }
a 2 { s 0 ( 0.002 ) , s 1 ( 0.012 ) , s 2 ( 0.049 ) , s 3 ( 0.440 ) , s 4 ( 0.497 ) } { s 0 ( 0.001 ) , s 1 ( 0.003 ) , s 2 ( 0.008 ) , s 3 ( 0.098 ) , s 4 ( 0.889 ) } { s 0 ( 0.003 ) , s 1 ( 0.011 ) , s 2 ( 0.034 ) , s 3 ( 0.365 ) , s 4 ( 0.587 ) } { s 0 ( 0.001 ) , s 1 ( 0.004 ) , s 2 ( 0.019 ) , s 3 ( 0.093 ) , s 4 ( 0.883 ) }
a 3 { s 0 ( 0.011 ) , s 1 ( 0.031 ) , s 2 ( 0.082 ) , s 3 ( 0.453 ) , s 4 ( 0.423 ) } { s 1 ( 0.001 ) , s 2 ( 0.022 ) , s 3 ( 0.083 ) , s 4 ( 0.894 ) } { s 0 ( 0.002 ) , s 1 ( 0.015 ) , s 2 ( 0.081 ) , s 3 ( 0.222 ) , s 4 ( 0.680 ) } { s 0 ( 0.007 ) , s 1 ( 0.007 ) , s 2 ( 0.024 ) , s 3 ( 0.113 ) , s 4 ( 0.849 ) }
a 4 { s 1 ( 0.002 ) , s 2 ( 0.078 ) , s 3 ( 0.329 ) , s 4 ( 0.591 ) } { s 2 ( 0.005 ) , s 3 ( 0.190 ) , s 4 ( 0.805 ) } { s 1 ( 0.003 ) , s 2 ( 0.060 ) , s 3 ( 0.338 ) , s 4 ( 0.559 ) } { s 0 ( 0.001 ) , s 1 ( 0.004 ) , s 2 ( 0.046 ) , s 3 ( 0.242 ) , s 4 ( 0.707 ) }
a 5 { s 0 ( 0.002 ) , s 1 ( 0.019 ) , s 2 ( 0.088 ) , s 3 ( 0.383 ) , s 4 ( 0.508 ) } { s 1 ( 0.004 ) , s 2 ( 0.025 ) , s 3 ( 0.246 ) , s 4 ( 0.725 ) } { s 0 ( 0.006 ) , s 1 ( 0.037 ) , s 2 ( 0.056 ) , s 3 ( 0.298 ) , s 4 ( 0.603 ) } { s 1 ( 0.004 ) , s 2 ( 0.015 ) , s 3 ( 0.158 ) , s 4 ( 0.823 ) }
Table A4. The star rating decision matrix (of MLNEVs, attributes 1–4).
c 1 c 2 c 3 c 4
a 6 { s 2 ( 0.007 ) , s 3 ( 0.113 ) , s 4 ( 0.880 ) } { s 2 ( 0.005 ) , s 3 ( 0.123 ) , s 4 ( 0.872 ) } { s 1 ( 0.007 ) , s 2 ( 0.005 ) , s 3 ( 0.269 ) , s 4 ( 0.718 ) } { s 0 ( 0.002 ) , s 1 ( 0.004 ) , s 2 ( 0.011 ) , s 3 ( 0.226 ) , s 4 ( 0.756 ) }
a 7 { s 2 ( 0.006 ) , s 3 ( 0.076 ) , s 4 ( 0.918 ) } { s 2 ( 0.010 ) , s 3 ( 0.137 ) , s 4 ( 0.853 ) } { s 0 ( 0.002 ) , s 2 ( 0.031 ) , s 3 ( 0.266 ) , s 4 ( 0.702 ) } { s 1 ( 0.004 ) , s 2 ( 0.064 ) , s 3 ( 0.283 ) , s 4 ( 0.650 ) }
a 8 { s 1 ( 0.005 ) , s 2 ( 0.018 ) , s 3 ( 0.245 ) , s 4 ( 0.731 ) } { s 1 ( 0.001 ) , s 2 ( 0.002 ) , s 3 ( 0.067 ) , s 4 ( 0.930 ) } { s 1 ( 0.004 ) , s 2 ( 0.009 ) , s 3 ( 0.238 ) , s 4 ( 0.750 ) } { s 0 ( 0.001 ) , s 1 ( 0.002 ) , s 2 ( 0.029 ) , s 3 ( 0.213 ) , s 4 ( 0.755 ) }
a 9 { s 3 ( 0.143 ) , s 4 ( 0.857 ) } { s 2 ( 0.007 ) , s 3 ( 0.255 ) , s 4 ( 0.737 ) } { s 2 ( 0.051 ) , s 3 ( 0.197 ) , s 4 ( 0.752 ) } { s 0 ( 0.001 ) , s 1 ( 0.001 ) , s 2 ( 0.009 ) , s 3 ( 0.188 ) , s 4 ( 0.801 ) }
Table A5. The star rating decision matrix (of MLNEVs, attributes 5–8).
c 5 c 6 c 7 c 8
a 6 { s 2 ( 0.011 ) , s 3 ( 0.190 ) , s 4 ( 0.798 ) } { s 3 ( 0.026 ) , s 4 ( 0.974 ) } { s 2 ( 0.010 ) , s 3 ( 0.228 ) , s 4 ( 0.762 ) } { s 0 ( 0.006 ) , s 1 ( 0.004 ) , s 2 ( 0.008 ) , s 3 ( 0.111 ) , s 4 ( 0.872 ) }
a 7 { s 0 ( 0.002 ) , s 2 ( 0.006 ) , s 3 ( 0.273 ) , s 4 ( 0.719 ) } { s 2 ( 0.006 ) , s 3 ( 0.190 ) , s 4 ( 0.805 ) } { s 2 ( 0.004 ) , s 3 ( 0.257 ) , s 4 ( 0.739 ) } { s 1 ( 0.002 ) , s 2 ( 0.012 ) , s 3 ( 0.093 ) , s 4 ( 0.893 ) }
a 8 { s 1 ( 0.002 ) , s 2 ( 0.013 ) , s 3 ( 0.323 ) , s 4 ( 0.662 ) } { s 1 ( 0.001 ) , s 2 ( 0.001 ) , s 3 ( 0.033 ) , s 4 ( 0.966 ) } { s 1 ( 0.004 ) , s 2 ( 0.012 ) , s 3 ( 0.198 ) , s 4 ( 0.786 ) } { s 0 ( 0.001 ) , s 1 ( 0.002 ) , s 2 ( 0.007 ) , s 3 ( 0.122 ) , s 4 ( 0.868 ) }
a 9 { s 1 ( 0.001 ) , s 2 ( 0.020 ) , s 3 ( 0.455 ) , s 4 ( 0.524 ) } { s 1 ( 0.001 ) , s 2 ( 0.005 ) , s 3 ( 0.136 ) , s 4 ( 0.858 ) } { s 1 ( 0.001 ) , s 2 ( 0.073 ) , s 3 ( 0.511 ) , s 4 ( 0.414 ) } { s 1 ( 0.001 ) , s 2 ( 0.007 ) , s 3 ( 0.239 ) , s 4 ( 0.753 ) }
Table A6. The text review decision matrix (of CNEVs, attributes 1–4).
c 1 c 2 c 3 c 4
a 1 { s 2 ( 0.04 ) , s 3 ( 0.05 ) , s 4 ( 0.60 ) , s 5 ( 0.23 ) , s 6 ( 0.07 ) } { s 2 ( 0.02 ) , s 3 ( 0.23 ) , s 4 ( 0.46 ) , s 5 ( 0.25 ) , s 6 ( 0.05 ) } { s 1 ( 0.01 ) , s 2 ( 0.19 ) , s 3 ( 0.11 ) , s 4 ( 0.45 ) , s 5 ( 0.17 ) , s 6 ( 0.06 ) } { s 1 ( 0.03 ) , s 2 ( 0.16 ) , s 3 ( 0.33 ) , s 4 ( 0.26 ) , s 5 ( 0.18 ) , s 6 ( 0.04 ) }
a 2 { s 2 ( 0.02 ) , s 3 ( 0.15 ) , s 4 ( 0.53 ) , s 5 ( 0.21 ) , s 6 ( 0.08 ) } { s 2 ( 0.01 ) , s 3 ( 0.31 ) , s 4 ( 0.33 ) , s 5 ( 0.28 ) , s 6 ( 0.07 ) } { s 1 ( 0.02 ) , s 2 ( 0.07 ) , s 3 ( 0.36 ) , s 4 ( 0.32 ) , s 5 ( 0.16 ) , s 6 ( 0.07 ) } { s 1 ( 0.02 ) , s 2 ( 0.16 ) , s 3 ( 0.59 ) , s 4 ( 0.16 ) , s 5 ( 0.06 ) , s 6 ( 0.04 ) }
a 3 { s 1 ( 0.01 ) , s 2 ( 0.04 ) , s 3 ( 0.11 ) , s 4 ( 0.56 ) , s 5 ( 0.20 ) , s 6 ( 0.09 ) } { s 2 ( 0.01 ) , s 3 ( 0.25 ) , s 4 ( 0.38 ) , s 5 ( 0.29 ) , s 6 ( 0.07 ) } { s 0 ( 0.01 ) , s 1 ( 0.01 ) , s 2 ( 0.07 ) , s 3 ( 0.17 ) , s 4 ( 0.46 ) , s 5 ( 0.19 ) , s 6 ( 0.09 ) } { s 0 ( 0.01 ) , s 1 ( 0.02 ) , s 2 ( 0.15 ) , s 3 ( 0.35 ) , s 4 ( 0.31 ) , s 5 ( 0.10 ) , s 6 ( 0.06 ) }
a 4 { s 1 ( 0.01 ) , s 2 ( 0.06 ) , s 3 ( 0.13 ) , s 4 ( 0.53 ) , s 5 ( 0.16 ) , s 6 ( 0.10 ) } { s 2 ( 0.03 ) , s 3 ( 0.28 ) , s 4 ( 0.44 ) , s 5 ( 0.19 ) , s 6 ( 0.06 ) } { s 1 ( 0.01 ) , s 2 ( 0.11 ) , s 3 ( 0.23 ) , s 4 ( 0.46 ) , s 5 ( 0.13 ) , s 6 ( 0.07 ) } { s 0 ( 0.01 ) , s 1 ( 0.03 ) , s 2 ( 0.21 ) , s 3 ( 0.33 ) , s 4 ( 0.32 ) , s 5 ( 0.06 ) , s 6 ( 0.04 ) }
a 5 { s 1 ( 0.01 ) , s 2 ( 0.05 ) , s 3 ( 0.12 ) , s 4 ( 0.55 ) , s 5 ( 0.18 ) , s 6 ( 0.07 ) } { s 2 ( 0.02 ) , s 3 ( 0.22 ) , s 4 ( 0.44 ) , s 5 ( 0.25 ) , s 6 ( 0.08 ) } { s 0 ( 0.01 ) , s 1 ( 0.01 ) , s 2 ( 0.06 ) , s 3 ( 0.19 ) , s 4 ( 0.49 ) , s 5 ( 0.16 ) , s 6 ( 0.08 ) } { s 0 ( 0.01 ) , s 1 ( 0.04 ) , s 2 ( 0.17 ) , s 3 ( 0.39 ) , s 4 ( 0.27 ) , s 5 ( 0.06 ) , s 6 ( 0.06 ) }
Table A7. The text review decision matrix (of CNEVs, attributes 5–8).
c 5 c 6 c 7 c 8
a 1 { s 2 ( 0.08 ) , s 3 ( 0.28 ) , s 4 ( 0.37 ) , s 5 ( 0.21 ) , s 6 ( 0.05 ) } { s 2 ( 0.01 ) , s 3 ( 0.28 ) , s 4 ( 0.28 ) , s 5 ( 0.25 ) , s 6 ( 0.18 ) } { s 1 ( 0.02 ) , s 2 ( 0.12 ) , s 3 ( 0.55 ) , s 4 ( 0.17 ) , s 5 ( 0.10 ) , s 6 ( 0.04 ) } { s 2 ( 0.05 ) , s 3 ( 0.37 ) , s 4 ( 0.32 ) , s 5 ( 0.19 ) , s 6 ( 0.08 ) }
a 2 { s 2 ( 0.04 ) , s 3 ( 0.51 ) , s 4 ( 0.25 ) , s 5 ( 0.14 ) , s 6 ( 0.06 ) } { s 2 ( 0.01 ) , s 3 ( 0.42 ) , s 4 ( 0.29 ) , s 5 ( 0.20 ) , s 6 ( 0.09 ) } { s 1 ( 0.01 ) , s 2 ( 0.08 ) , s 3 ( 0.60 ) , s 4 ( 0.15 ) , s 5 ( 0.09 ) , s 6 ( 0.06 ) } { s 1 ( 0.01 ) , s 2 ( 0.06 ) , s 3 ( 0.49 ) , s 4 ( 0.26 ) , s 5 ( 0.11 ) , s 6 ( 0.07 ) }
a 3 { s 2 ( 0.08 ) , s 3 ( 0.42 ) , s 4 ( 0.28 ) , s 5 ( 0.14 ) , s 6 ( 0.08 ) } { s 2 ( 0.02 ) , s 3 ( 0.28 ) , s 4 ( 0.27 ) , s 5 ( 0.30 ) , s 6 ( 0.12 ) } { s 1 ( 0.02 ) , s 2 ( 0.11 ) , s 3 ( 0.53 ) , s 4 ( 0.16 ) , s 5 ( 0.10 ) , s 6 ( 0.08 ) } { s 2 ( 0.06 ) , s 3 ( 0.46 ) , s 4 ( 0.26 ) , s 5 ( 0.13 ) , s 6 ( 0.09 ) }
a 4 { s 2 ( 0.02 ) , s 3 ( 0.39 ) , s 4 ( 0.37 ) , s 5 ( 0.17 ) , s 6 ( 0.05 ) } { s 3 ( 0.34 ) , s 4 ( 0.33 ) , s 5 ( 0.24 ) , s 6 ( 0.10 ) } { s 2 ( 0.06 ) , s 3 ( 0.44 ) , s 4 ( 0.27 ) , s 5 ( 0.16 ) , s 6 ( 0.06 ) } { s 1 ( 0.01 ) , s 2 ( 0.06 ) , s 3 ( 0.35 ) , s 4 ( 0.40 ) , s 5 ( 0.09 ) , s 6 ( 0.09 ) }
a 5 { s 1 ( 0.01 ) , s 2 ( 0.07 ) , s 3 ( 0.35 ) , s 4 ( 0.35 ) , s 5 ( 0.17 ) , s 6 ( 0.06 ) } { s 3 ( 0.34 ) , s 4 ( 0.31 ) , s 5 ( 0.22 ) , s 6 ( 0.13 ) } { s 1 ( 0.01 ) , s 2 ( 0.10 ) , s 3 ( 0.34 ) , s 4 ( 0.28 ) , s 5 ( 0.15 ) , s 6 ( 0.11 ) } { s 2 ( 0.05 ) , s 3 ( 0.30 ) , s 4 ( 0.39 ) , s 5 ( 0.16 ) , s 6 ( 0.09 ) }
Table A8. The text review decision matrix (of MLNEVs, attributes 1–4).
c 1 c 2 c 3 c 4
a 6 { s 2 ( 0.02 ) , s 3 ( 0.11 ) , s 4 ( 0.46 ) , s 5 ( 0.29 ) , s 6 ( 0.12 ) } { s 2 ( 0.01 ) , s 3 ( 0.27 ) , s 4 ( 0.34 ) , s 5 ( 0.30 ) , s 6 ( 0.09 ) } { s 1 ( 0.01 ) , s 2 ( 0.07 ) , s 3 ( 0.28 ) , s 4 ( 0.36 ) , s 5 ( 0.21 ) , s 6 ( 0.07 ) } { s 0 ( 0.01 ) , s 1 ( 0.04 ) , s 2 ( 0.19 ) , s 3 ( 0.37 ) , s 4 ( 0.25 ) , s 5 ( 0.07 ) , s 6 ( 0.07 ) }
a 7 { s 2 ( 0.03 ) , s 3 ( 0.13 ) , s 4 ( 0.28 ) , s 5 ( 0.33 ) , s 6 ( 0.22 ) } { s 2 ( 0.30 ) , s 3 ( 0.30 ) , s 4 ( 0.31 ) , s 5 ( 0.08 ) } { s 1 ( 0.03 ) , s 2 ( 0.08 ) , s 3 ( 0.20 ) , s 4 ( 0.45 ) , s 5 ( 0.16 ) , s 6 ( 0.08 ) } { s 0 ( 0.03 ) , s 1 ( 0.04 ) , s 2 ( 0.10 ) , s 3 ( 0.50 ) , s 4 ( 0.22 ) , s 5 ( 0.05 ) , s 6 ( 0.06 ) }
a 8 { s 2 ( 0.03 ) , s 3 ( 0.19 ) , s 4 ( 0.51 ) , s 5 ( 0.23 ) , s 6 ( 0.05 ) } { s 2 ( 0.01 ) , s 3 ( 0.40 ) , s 4 ( 0.32 ) , s 5 ( 0.23 ) , s 6 ( 0.04 ) } { s 1 ( 0.01 ) , s 2 ( 0.07 ) , s 3 ( 0.31 ) , s 4 ( 0.38 ) , s 5 ( 0.16 ) , s 6 ( 0.07 ) } { s 0 ( 0.01 ) , s 1 ( 0.03 ) , s 2 ( 0.10 ) , s 3 ( 0.47 ) , s 4 ( 0.28 ) , s 5 ( 0.08 ) , s 6 ( 0.04 ) }
a 9 { s 2 ( 0.01 ) , s 3 ( 0.14 ) , s 4 ( 0.50 ) , s 5 ( 0.29 ) , s 6 ( 0.05 ) } { s 2 ( 0.01 ) , s 3 ( 0.34 ) , s 4 ( 0.38 ) , s 5 ( 0.23 ) , s 6 ( 0.04 ) } { s 0 ( 0.04 ) , s 2 ( 0.07 ) , s 3 ( 0.30 ) , s 4 ( 0.34 ) , s 5 ( 0.20 ) , s 6 ( 0.04 ) } { s 1 ( 0.01 ) , s 2 ( 0.07 ) , s 3 ( 0.66 ) , s 4 ( 0.17 ) , s 5 ( 0.03 ) , s 6 ( 0.06 ) }
Table A9. The text review decision matrix (of MLNEVs, attributes 5–8).
c 5 c 6 c 7 c 8
a 6 { s 2 ( 0.02 ) , s 3 ( 0.44 ) , s 4 ( 0.30 ) , s 5 ( 0.18 ) , s 6 ( 0.06 ) } { s 2 ( 0.01 ) , s 3 ( 0.25 ) , s 4 ( 0.28 ) , s 5 ( 0.34 ) , s 6 ( 0.11 ) } { s 1 ( 0.01 ) , s 2 ( 0.09 ) , s 3 ( 0.47 ) , s 4 ( 0.20 ) , s 5 ( 0.16 ) , s 6 ( 0.07 ) } { s 2 ( 0.05 ) , s 3 ( 0.62 ) , s 4 ( 0.21 ) , s 5 ( 0.07 ) , s 6 ( 0.05 ) }
a 7 { s 2 ( 0.05 ) , s 3 ( 0.34 ) , s 4 ( 0.30 ) , s 5 ( 0.19 ) , s 6 ( 0.12 ) } { s 2 ( 0.03 ) , s 3 ( 0.44 ) , s 4 ( 0.12 ) , s 5 ( 0.22 ) , s 6 ( 0.18 ) } { s 1 ( 0.01 ) , s 2 ( 0.05 ) , s 3 ( 0.51 ) , s 4 ( 0.18 ) , s 5 ( 0.18 ) , s 6 ( 0.05 ) } { s 2 ( 0.05 ) , s 3 ( 0.57 ) , s 4 ( 0.23 ) , s 5 ( 0.09 ) , s 6 ( 0.06 ) }
a 8 { s 2 ( 0.05 ) , s 3 ( 0.53 ) , s 4 ( 0.28 ) , s 5 ( 0.11 ) , s 6 ( 0.03 ) } { s 2 ( 0.02 ) , s 3 ( 0.31 ) , s 4 ( 0.29 ) , s 5 ( 0.22 ) , s 6 ( 0.16 ) } { s 2 ( 0.08 ) , s 3 ( 0.64 ) , s 4 ( 0.15 ) , s 5 ( 0.09 ) , s 6 ( 0.04 ) } { s 2 ( 0.04 ) , s 3 ( 0.59 ) , s 4 ( 0.25 ) , s 5 ( 0.08 ) , s 6 ( 0.04 ) }
a 9 { s 2 ( 0.02 ) , s 3 ( 0.46 ) , s 4 ( 0.27 ) , s 5 ( 0.19 ) , s 6 ( 0.05 ) } { s 2 ( 0.01 ) , s 3 ( 0.43 ) , s 4 ( 0.20 ) , s 5 ( 0.24 ) , s 6 ( 0.11 ) } { s 2 ( 0.03 ) , s 3 ( 0.59 ) , s 4 ( 0.19 ) , s 5 ( 0.12 ) , s 6 ( 0.06 ) } { s 2 ( 0.02 ) , s 3 ( 0.65 ) , s 4 ( 0.21 ) , s 5 ( 0.07 ) , s 6 ( 0.04 ) }
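Each cell of the appendix tables is a probabilistic linguistic term set (PLTS) of the form {s_k(p_k)}. As a minimal illustration, not the paper's exact aggregation operator, the expected linguistic subscript gives a quick scalar summary of such a cell:

```python
# Sketch only: score a PLTS {s_k(p_k)} by its expected linguistic subscript,
# sum(k * p_k) / sum(p_k); dividing by the probability mass normalizes away
# the small rounding residue visible in some appendix rows.

def plts_score(plts):
    """Expected subscript of a PLTS given as {subscript: probability}."""
    total = sum(plts.values())
    return sum(k * p for k, p in plts.items()) / total

# Cell a1/c1 of Table A2: {s1(0.002), s2(0.011), s3(0.165), s4(0.823)}
a1_c1 = {1: 0.002, 2: 0.011, 3: 0.165, 4: 0.823}
print(round(plts_score(a1_c1), 3))  # 3.807, i.e., close to the top term s4
```

A score near 4 confirms what the raw distribution already suggests: almost all of the probability mass sits on the highest star-rating terms.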

References

  1. Meng, W.D.; Ma, M.M.; Li, Y.Y.; Huang, B. New energy vehicle R&D strategy with supplier capital constraints under China’s dual credit policy. Energy Policy 2022, 168, 113099. [Google Scholar] [CrossRef]
  2. He, S.F.; Wang, Y.M. Evaluating new energy vehicles by picture fuzzy sets based on sentiment analysis from online reviews. Artif. Intell. Rev. 2023, 56, 2171–2192. [Google Scholar] [CrossRef]
  3. Cai, B.W. Deep Learning-Based Economic Forecasting for the New Energy Vehicle Industry. J. Math. 2021, 2021, 3870657. [Google Scholar] [CrossRef]
  4. Hua, Y.F.; Dong, F. How can new energy vehicles become qualified relays from the perspective of carbon neutralization? Literature review and research prospect based on the CiteSpace knowledge map. Environ. Sci. Pollut. Res. Int. 2022, 29, 55473–55491. [Google Scholar] [CrossRef]
  5. Wu, D.S.; Xie, Y.; Lyu, X.Y. The impacts of heterogeneous traffic regulation on air pollution: Evidence from China. Transp. Res. Part D Transp. Environ. 2022, 109, 103388. [Google Scholar] [CrossRef]
  6. Jiang, C.Q.; Duan, R.; Jain, H.K.; Liu, S.X.; Liang, K. Hybrid collaborative filtering for high-involvement products: A solution to opinion sparsity and dynamics. Decis. Support Syst. 2015, 79, 195–208. [Google Scholar] [CrossRef]
  7. Lin, B.Q.; Shi, L. Do environmental quality and policy changes affect the evolution of consumers’ intentions to buy new energy vehicles. Appl. Energy 2022, 310, 118582. [Google Scholar] [CrossRef]
  8. Abrahams, A.S.; Jiao, J.; Fan, W.G.; Wang, G.A.; Zhang, Z.J. What’s buzzing in the blizzard of buzz? Automotive component isolation in social media postings. Decis. Support Syst. 2013, 55, 871–882. [Google Scholar] [CrossRef]
  9. Xu, Z.G.; Dang, Y.Z.; Wang, Q.W. Potential buyer identification and purchase likelihood quantification by mining user-generated content on social media. Expert Syst. Appl. 2022, 187, 115899. [Google Scholar] [CrossRef]
  10. Liu, H.F.; Jayawardhena, C.; Osburg, V.-S.; Mohiuddin Babu, M. Do online reviews still matter post-purchase? Internet Res. 2020, 30, 109–139. [Google Scholar] [CrossRef]
  11. Yang, J.; Sarathy, R.; Lee, J. The effect of product review balance and volume on online Shoppers’ risk perception and purchase intention. Decis. Support Syst. 2016, 89, 66–76. [Google Scholar] [CrossRef]
  12. Soll, J.B.; Larrick, R.P. Strategies for Revising Judgment: How (and How Well) People Use Others’ Opinions. J. Exp. Psychol. Learn. Mem. Cogn. 2009, 35, 780–805. [Google Scholar] [CrossRef] [PubMed]
  13. Monaro, M.; Cannonito, E.; Gamberini, L.; Sartori, G. Spotting faked 5 stars ratings in E-Commerce using mouse dynamics. Comput. Hum. Behav. 2020, 109, 106348. [Google Scholar] [CrossRef]
  14. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef]
  15. Zadeh, L.A. The concept of a linguistic variable and its application to approximate reasoning—I. Inf. Sci. 1975, 8, 199–249. [Google Scholar] [CrossRef]
  16. Zadeh, L.A. The concept of a linguistic variable and its application to approximate reasoning—II. Inf. Sci. 1975, 8, 301–357. [Google Scholar] [CrossRef]
  17. Zadeh, L.A. The concept of a linguistic variable and its application to approximate reasoning—III. Inf. Sci. 1975, 9, 43–80. [Google Scholar] [CrossRef]
  18. Rodríguez, R.; Martinez, L.; Herrera, F. Hesitant Fuzzy Linguistic Term Sets for Decision Making. IEEE Trans. Fuzzy Syst. 2012, 20, 109–119. [Google Scholar] [CrossRef]
  19. Zhu, B.; Xu, Z. Consistency Measures for Hesitant Fuzzy Linguistic Preference Relations. IEEE Trans. Fuzzy Syst. 2014, 22, 35–45. [Google Scholar] [CrossRef]
  20. Wei, C.P. A Multigranularity Linguistic Group Decision-Making Method Based on Hesitant 2-Tuple Sets. Int. J. Intell. Syst. 2016, 31, 612–634. [Google Scholar]
  21. Pang, Q.; Wang, H.; Xu, Z.S. Probabilistic linguistic term sets in multi-attribute group decision making. Inf. Sci. 2016, 369, 128–143. [Google Scholar] [CrossRef]
  22. Wang, Z.H.; Dong, X.Y. Determinants and policy implications of residents’ new energy vehicle purchases: The evidence from China. Nat. Hazards 2016, 82, 155–173. [Google Scholar] [CrossRef]
  23. Ma, S.C.; Fan, Y.; Feng, L.Y. An evaluation of government incentives for new energy vehicles in China focusing on vehicle purchasing restrictions. Energy Policy 2017, 110, 609–618. [Google Scholar] [CrossRef]
  24. Zhao, H.B.; Bai, R.B.; Liu, R.; Wang, H. Exploring purchase intentions of new energy vehicles: Do “mianzi” and green peer influence matter? Front. Psychol. 2022, 13, 951132. [Google Scholar] [CrossRef] [PubMed]
  25. Yetano Roche, M.; Mourato, S.; Fischedick, M.; Pietzner, K.; Viebahn, P. Public attitudes towards and demand for hydrogen and fuel cell vehicles: A review of the evidence and methodological implications. Energy Policy 2010, 38, 5301–5310. [Google Scholar] [CrossRef]
  26. Cai, M.S.; Tan, Y.J.; Ge, B.F.; Dou, Y.J.; Huang, G.; Du, Y.H. PURA: A Product-and-User Oriented Approach for Requirement Analysis from Online Reviews. IEEE Syst. J. 2022, 16, 566–577. [Google Scholar] [CrossRef]
  27. Liu, G.X.; Fan, S.Q.; Tu, Y.; Wang, G.J. Innovative Supplier Selection from Collaboration Perspective with a Hybrid MCDM Model: A Case Study Based on NEVs Manufacturer. Symmetry Basel 2021, 13, 143. [Google Scholar] [CrossRef]
  28. Nicolalde, J.F.; Cabrera, M.; Martinez-Gomez, J.; Salazar, R.B.; Reyes, E. Selection of a phase change material for energy storage by multi-criteria decision method regarding the thermal comfort in a vehicle. J. Energy Storage 2022, 51, 104437. [Google Scholar] [CrossRef]
  29. Yu, S.M.; Du, Z.J.; Wang, J.Q.; Luo, H.Y.; Lin, X.D. Trust and behavior analysis-based fusion method for heterogeneous multiple attribute group decision-making. Comput. Ind. Eng. 2021, 152, 106992. [Google Scholar] [CrossRef]
  30. Yu, S.M.; Du, Z.J.; Zhang, X.Y.; Luo, H.Y.; Lin, X.D. Trust Cop-Kmeans Clustering Analysis and Minimum-Cost Consensus Model Considering Voluntary Trust Loss in Social Network Large-Scale Decision-Making. IEEE Trans. Fuzzy Syst. 2022, 30, 2634–2648. [Google Scholar] [CrossRef]
  31. Yu, S.M.; Zhang, X.T.; Du, Z.J. Enhanced Minimum-Cost Consensus: Focusing on Overadjustment and Flexible Consensus Cost. Inf. Fusion 2023, 89, 336–354. [Google Scholar] [CrossRef]
  32. Zheng, J.; Wang, Y.M.; Zhang, K. Solution of heterogeneous multi-attribute case-based decision making problems by using method based on TODIM. Soft Comput. 2020, 24, 7081–7091. [Google Scholar] [CrossRef]
  33. Yu, S.M.; Du, Z.J.; Xu, X.H. Hierarchical Punishment-Driven Consensus Model for Probabilistic Linguistic Large-Group Decision Making with Application to Global Supplier Selection. Group Decis. Negot. 2021, 30, 1343–1372. [Google Scholar] [CrossRef]
  34. Huang, J.Y.; Jiang, N.Y.; Chen, J.; Balezentis, T.; Streimikiene, D. Multi-criteria group decision-making method for green supplier selection based on distributed interval variables. Econ. Res. Ekon. Istraživanja 2022, 35, 746–761. [Google Scholar] [CrossRef]
  35. Yu, S.M.; Wang, J.; Wang, J.Q.; Li, L. A multi-criteria decision-making model for hotel selection with linguistic distribution assessments. Appl. Soft Comput. 2018, 67, 741–755. [Google Scholar] [CrossRef]
  36. Giri, B.C.; Molla, M.U.; Biswas, P. TOPSIS Method for Neutrosophic Hesitant Fuzzy Multi-Attribute Decision Making. Informatica 2020, 31, 35–63. [Google Scholar] [CrossRef]
  37. Nobre, F.F.; Trotta, L.T.F.; Gomes, L. Multi-criteria decision making—An approach to setting priorities in health care. Stat. Med. 1999, 18, 3345–3354. [Google Scholar] [CrossRef]
  38. Opricovic, S.; Tzeng, G.H. Multicriteria planning of post-earthquake sustainable reconstruction. Comput. Aided Civ. Infrastruct. Eng. 2002, 17, 211–220. [Google Scholar] [CrossRef]
  39. Roy, B. The outranking approach and the foundations of electre methods. Theory Decis. 1991, 31, 49–73. [Google Scholar] [CrossRef]
  40. Brauers, W.K.M.; Zavadskas, E.K. Project management by multimoora as an instrument for transition economies. Technol. Econ. Dev. Econ. 2010, 16, 5–24. [Google Scholar] [CrossRef]
  41. Agrebi, M.; Abed, M. Decision-making from multiple uncertain experts: Case of distribution center location selection. Soft Comput. 2021, 25, 4525–4544. [Google Scholar] [CrossRef]
  42. Zhang, K.; Zheng, J.; Wang, Y.M. A heterogeneous multi-attribute case retrieval method based on neutrosophic sets and TODIM for emergency situations. Appl. Intell. 2022, 52, 15177–15192. [Google Scholar] [CrossRef] [PubMed]
  43. Wen, Z.; Xiong, Z.M.; Lu, H.; Xia, Y.P. Optimisation of Treatment Scheme for Water Inrush Disaster in Tunnels Based on Fuzzy Multi-criteria Decision-Making in an Uncertain Environment. Arab. J. Sci. Eng. 2019, 44, 8249–8263. [Google Scholar] [CrossRef]
  44. Wang, S.L.; Qu, S.J.; Goh, M.; Wahab, M.I.M.; Zhou, H. Integrated Multi-stage Decision-Making for Winner Determination Problem in Online Multi-attribute Reverse Auctions under Uncertainty. Int. J. Fuzzy Syst. 2019, 21, 2354–2372. [Google Scholar] [CrossRef]
  45. Abu Dabous, S.; Zeiada, W.; Zayed, T.; Al-Ruzouq, R. Sustainability-informed multi-criteria decision support framework for ranking and prioritization of pavement sections. J. Clean. Prod. 2020, 244, 118755. [Google Scholar] [CrossRef]
  46. Yang, Y.P.; Liu, Z.Q.; Chen, H.M.; Wang, Y.Q.; Yuan, G.H. Evaluating Regional Eco-Green Cooperative Development Based on a Heterogeneous Multi-Criteria Decision-Making Model: Example of the Yangtze River Delta Region. Sustainability 2020, 12, 3029. [Google Scholar] [CrossRef]
  47. Wang, J.Q.; Zhang, X.H. A Novel Multi-Criteria Decision-Making Method Based on Rough Sets and Fuzzy Measures. Axioms 2022, 11, 275. [Google Scholar] [CrossRef]
  48. Fan, Z.P.; Li, G.M.; Liu, Y. Processes and methods of information fusion for ranking products based on online reviews: An overview. Inf. Fusion 2020, 60, 87–97. [Google Scholar] [CrossRef]
  49. Fan, Z.P.; Xi, Y.; Liu, Y. Supporting consumer’s purchase decision: A method for ranking products based on online multi-attribute product ratings. Soft Comput. 2018, 22, 5247–5261. [Google Scholar] [CrossRef]
  50. Liu, P.D.; Teng, F. Probabilistic linguistic TODIM method for selecting products through online product reviews. Inf. Sci. 2019, 485, 441–455. [Google Scholar] [CrossRef]
  51. Bi, J.W.; Liu, Y.; Fan, Z.P. Representing sentiment analysis results of online reviews using interval type-2 fuzzy numbers and its application to product ranking. Inf. Sci. 2019, 504, 293–307. [Google Scholar] [CrossRef]
  52. Sharma, H.; Tandon, A.; Kapur, P.K.; Aggarwal, A.G. Ranking hotels using aspect ratings based sentiment classification and interval-valued neutrosophic TOPSIS. Int. J. Syst. Assur. Eng. Manag. 2019, 10, 973–983. [Google Scholar] [CrossRef]
  53. Zhang, C.X.; Zhao, M.; Cai, M.Y.; Xiao, Q.R. Multi-stage multi-attribute decision making method based on online reviews for hotel selection considering the aspirations with different development speeds. Comput. Ind. Eng. 2020, 143, 106421. [Google Scholar] [CrossRef]
  54. Zhang, D.; Li, Y.L.; Wu, C. An extended TODIM method to rank products with online reviews under intuitionistic fuzzy environment. J. Oper. Res. Soc. 2020, 71, 322–334. [Google Scholar] [CrossRef]
  55. Zhang, C.; Tian, Y.X.; Fan, L.W.; Li, Y.H. Customized ranking for products through online reviews: A method incorporating prospect theory with an improved VIKOR. Appl. Intell. 2020, 50, 1725–1744. [Google Scholar] [CrossRef]
  56. Song, Y.M.; Li, G.X.; Li, T.; Li, Y.H. A purchase decision support model considering consumer personalization about aspirations and risk attitudes. J. Retail. Consum. Serv. 2021, 63, 102728. [Google Scholar] [CrossRef]
  57. Dahooie, J.H.; Raafat, R.; Qorbani, A.R.; Daim, T. An intuitionistic fuzzy data-driven product ranking model using sentiment analysis and multi-criteria decision-making. Technol. Forecast. Soc. Chang. 2021, 173, 121158. [Google Scholar] [CrossRef]
  58. Yang, Z.L.; Gao, Y.; Fu, X.L. A decision-making algorithm combining the aspect-based sentiment analysis and intuitionistic fuzzy-VIKOR for online hotel reservation. Ann. Oper. Res. 2021. [Google Scholar] [CrossRef] [PubMed]
  59. Qin, J.D.; Zeng, M.Z. An integrated method for product ranking through online reviews based on evidential reasoning theory and stochastic dominance. Inf. Sci. 2022, 612, 37–61. [Google Scholar] [CrossRef]
  60. Bi, J.W.; Han, T.Y.; Yao, Y.B.; Li, H. Ranking hotels through multi-dimensional hotel information: A method considering travelers’ preferences and expectations. Inf. Technol. Tour. 2022, 24, 127–155. [Google Scholar] [CrossRef]
  61. Tayal, D.K.; Yadav, S.K.; Arora, D. Personalized ranking of products using aspect-based sentiment analysis and Plithogenic sets. Multimed. Tools Appl. 2022, 82, 1261–1287. [Google Scholar] [CrossRef]
  62. Tversky, A.; Kahneman, D. Judgment under uncertainty—Heuristics and biases. Science 1974, 185, 1124–1131. [Google Scholar] [CrossRef]
  63. Stanovich, K.E.; West, R.F. Individual differences in reasoning: Implications for the rationality debate? Behav. Brain Sci. 2000, 23, 645–726. [Google Scholar] [CrossRef] [PubMed]
  64. Cho, H.S.; Sosa, M.E.; Hasija, S. Reading Between the Stars: Understanding the Effects of Online Customer Reviews on Product Demand. Manuf. Serv. Oper. Manag. 2021, 24, 1887–2386. [Google Scholar]
  65. Gavilan, D.; Avello, M.; Martinez-Navarro, G. The influence of online ratings and reviews on hotel booking consideration. Tour. Manag. 2018, 66, 53–61. [Google Scholar] [CrossRef]
  66. Hong, S.; Pittman, M. eWOM anatomy of online product reviews: Interaction effects of review number, valence, and star ratings on perceived credibility. Int. J. Advert. 2020, 39, 892–920. [Google Scholar] [CrossRef]
  67. Hwang, C.-L.; Yoon, K. Multiple Attribute Decision Making: Methods and Applications—A State-of-the-Art Survey. In Lecture Notes in Economics and Mathematical Systems; Springer: Berlin/Heidelberg, Germany, 1981. [Google Scholar]
  68. Chen, X.H.; Zhang, W.W.; Xu, X.H.; Cao, W.Z. Managing Group Confidence and Consensus in Intuitionistic Fuzzy Large Group Decision-Making Based on Social Media Data Mining. Group Decis. Negot. 2022, 31, 995–1023. [Google Scholar] [CrossRef]
  69. Hu, J.H.; Zhang, X.H.; Yang, Y.; Liu, Y.M.; Chen, X.H. New doctors ranking system based on VIKOR method. Int. Trans. Oper. Res. 2020, 27, 1236–1261. [Google Scholar] [CrossRef]
  70. Li, Y.; Zhang, Y.X.; Xu, Z.S. A Decision-Making Model Under Probabilistic Linguistic Circumstances with Unknown Criteria Weights for Online Customer Reviews. Int. J. Fuzzy Syst. 2020, 22, 777–789. [Google Scholar] [CrossRef]
  71. Yen, J. Generalizing the dempster shafer theory to fuzzy-sets. IEEE Trans. Syst. Man Cybern. 1990, 20, 559–570. [Google Scholar] [CrossRef]
Figure 1. The proposed method for the product ranking problem.
Figure 2. Four states of linguistic terms.
Figure 3. Ranking results based on different values of parameters (CNEVs).
Figure 4. The legend of Figure 3.
Figure 5. Ranking results based on different values of parameters (MLNEVs).
Figure 6. The legend of Figure 5.
Table 1. Recent studies on product ranking based on multi-attribute decision making methods.
Authors | Year | Star Ratings | Text Reviews | Application | Contribution Phase(s)
Fan et al. [49] | 2018 | | | Automobile | Data processing, product ranking
Liu and Teng [50] | 2019 | | | Automobile | Weight calculation
Bi et al. [51] | 2019 | | | Automobile | Data processing, sentiment analysis
Sharma et al. [52] | 2019 | | | Hotel | Weight calculation
Zhang et al. [53] | 2020 | | | Hotel | Data processing, product ranking
Zhang et al. [54] | 2020 | | | Mobile phone | Sentiment analysis, weight calculation, product ranking
Zhang et al. [55] | 2020 | | | Automobile | Sentiment analysis, weight calculation
Song et al. [56] | 2021 | | | Automobile | Weight calculation, product ranking
Dahooie et al. [57] | 2021 | | | Mobile phone | Others
Yang et al. [58] | 2021 | | | Hotel | Data processing
Qin and Zeng [59] | 2022 | | | Computer | Sentiment analysis
Bi et al. [60] | 2022 | | | Hotel | Others
Tayal et al. [61] | 2022 | | | Hotel | Data processing, product ranking
Table 2. The designed adjustment rules.
SR \ TR | s0 | s1 | s2 | s3 | s4 | s5 | s6
s0 | s0 | s0 | s0 | s0 | s0 | s0 | s0
s1 | s0 | s1 | s1 | s1 | s1 | s1 | s1
s2 | s1 | s1.33 | s1.67 | s2 | s2 | s2 | s2
s3 | s2 | s2.25 | s2.5 | s2.75 | s3 | s3 | s3
s4 | s3 | s3.17 | s3.33 | s3.5 | s3.67 | s3.83 | s4
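Table 2 can be read as a 5 × 7 lookup: the row is the star-rating term (s0 to s4) and the column is the text-review term (s0 to s6). The following sketch encodes that lookup and shows how the two probability distributions could be fused into one comprehensive term set, assuming independence between the two sources (the paper's exact aggregation may differ):

```python
# Table 2 as a lookup matrix: ADJ[i][j] is the adjusted subscript when the
# star rating is s_i (row, i = 0..4) and the text review is s_j (col, j = 0..6).
ADJ = [
    [0, 0,    0,    0,    0,    0,    0],
    [0, 1,    1,    1,    1,    1,    1],
    [1, 1.33, 1.67, 2,    2,    2,    2],
    [2, 2.25, 2.5,  2.75, 3,    3,    3],
    [3, 3.17, 3.33, 3.5,  3.67, 3.83, 4],
]

def combine(sr_plts, tr_plts):
    """Fuse a star-rating PLTS with a text-review PLTS into a comprehensive
    PLTS, assuming the two probability distributions are independent."""
    fused = {}
    for i, p in sr_plts.items():
        for j, q in tr_plts.items():
            k = ADJ[i][j]
            fused[k] = fused.get(k, 0.0) + p * q
    return fused

# A 4-star rating contradicted by a mildly negative review (s2) drops to s3.33:
print(ADJ[4][2])  # 3.33
```

This is exactly the downward pull the adjustment rules are designed for: a high star rating is only kept at s4 when the accompanying text review is also at the top of its scale.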
Table 3. Top 9 new energy vehicles and their corresponding number of ratings/reviews.
Type | Vehicle | Number of Ratings/Reviews
Compact New Energy Vehicle (CNEV) | Car A | 718
 | Car B | 931
 | Car C | 585
 | Car D | 509
 | Car E | 484
Medium and Large New Energy Vehicle (MLNEV) | Car F | 597
 | Car G | 209
 | Car H | 1108
 | Car I | 1005
Table 4. Consumer demands and their corresponding high-frequency words.
Vehicle Type | Consumer Type | High-Frequency Words
CNEV | Family-oriented | Back, seats, body, place, compact, car, kids, price, mobility, electric car, saving money, small car, trunk, urban, exterior, electricity consumption, wife, mileage, and air conditioning.
 | Appearance-oriented | Car, exterior, interior, design, cost, seats, overall, new energy, fashion, value, body, price, back, steering wheel, style, fuel, technology, urban, and sound insulation.
 | Professional-oriented | Rear, fuel consumption, mode, seats, steering wheel, trunk, air conditioning, sport, exterior, storage, brakes, features, battery, energy, fuel efficiency, automatic, engine, chassis, and dashboard.
MLNEV | Experiential-oriented | Service, experiential, battery, change, free, mode, automatic, power station, assist, function, seat, friend, smart, system, electric, air conditioning, upgrade, ideal, and sport.
 | Appearance-oriented | Exterior, design, interior, beauty, function, overall, body, shaped, technology, rear, cost-efficiency, stylish, grand, models, seats, lines, color, style, and headlights.
 | Professional-oriented | Rear, trunk, seats, steering wheel, power consumption, storage, chassis, sound insulation, overall, effect, body, fuel, design, legs, urban, exterior, brakes, head, and space design.
 | Performance-oriented | Mode, fuel consumption, rear, hybrid, exterior, steering wheel, trunk, fuel, new energy, interior, fuel efficient, seats, engine, mileage, brakes, energy consumption, and car.
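The word lists in Table 4 drive the consumer-classification step behind Tables 5 and 6. A hedged sketch of that idea, assigning a review to the consumer type whose word list it overlaps most (the trimmed word sets and sample review here are illustrative; the paper's pipeline extracts keywords with TF-IDF):

```python
# Illustrative, heavily trimmed versions of the Table 4 word lists for CNEVs.
CNEV_TYPES = {
    "family-oriented": {"back", "seats", "kids", "price", "trunk", "urban", "wife"},
    "appearance-oriented": {"exterior", "interior", "design", "fashion", "style", "value"},
    "professional-oriented": {"fuel", "brakes", "battery", "engine", "chassis", "mode"},
}

def classify(review_words, type_words=CNEV_TYPES):
    # Pick the consumer type with the largest word overlap with the review.
    return max(type_words, key=lambda t: len(type_words[t] & review_words))

print(classify({"exterior", "design", "stylish"}))  # appearance-oriented
```

Each review is counted toward exactly one type this way, which is how the per-type review counts in Tables 5 and 6 can partition each vehicle's total from Table 3.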
Table 5. The number of ratings/reviews under different consumer types for CNEVs.
Vehicle | Family-Oriented | Appearance-Oriented | Professional-Oriented
Car A | 515 | 116 | 87
Car B | 156 | 288 | 487
Car C | 283 | 87 | 215
Car D | 31 | 319 | 159
Car E | 179 | 160 | 145
Table 6. The number of ratings/reviews under different consumer types for MLNEVs.
Vehicle | Experiential-Oriented | Appearance-Oriented | Professional-Oriented | Performance-Oriented
Car F | 71 | 207 | 276 | 43
Car G | 128 | 10 | 39 | 32
Car H | 207 | 411 | 461 | 29
Car I | 6 | 71 | 254 | 674
Table 7. The comprehensive probabilistic linguistic decision matrix (alternative 1).
a 1
c 1 { s 1 ( 0.002 ) , s 2 ( 0.011 ) , s 2.25 ( 0.001 ) , s 2.5 ( 0.009 ) , s 2.75 ( 0.014 ) , s 3 ( 0.141 ) , s 3.33 ( 0.030 ) , s 3.5 ( 0.038 ) , s 3.67 ( 0.489 ) , s 3.83 ( 0.208 ) , s 4 ( 0.058 ) }
c 2 { s 1 ( 0.001 ) , s 1.67 ( 0.001 ) , s 2 ( 0.021 ) , s 2.5 ( 0.006 ) , s 2.75 ( 0.036 ) , s 3 ( 0.086 ) , s 3.33 ( 0.012 ) , s 3.5 ( 0.178 ) , s 3.67 ( 0.402 ) , s 3.83 ( 0.210 ) , s 4 ( 0.048 ) }
c 3 { s 1 ( 0.002 ) , s 1.67 ( 0.003 ) , s 2 ( 0.011 ) , s 2.25 ( 0.003 ) , s 2.5 ( 0.020 ) , s 2.75 ( 0.014 ) , s 3 ( 0.072 ) , s 3.17 ( 0.010 ) , s 3.33 ( 0.100 ) , s 3.5 ( 0.110 ) , s 3.67 ( 0.441 ) , s 3.83 ( 0.158 ) , s 4 ( 0.056 ) }
c 4 { s 0 ( 0.001 ) , s 1 ( 0.003 ) , s 1.33 ( 0.001 ) , s 1.67 ( 0.003 ) , s 2 ( 0.016 ) , s 2.25 ( 0.003 ) , s 2.5 ( 0.033 ) , s 2.75 ( 0.052 ) , s 3 ( 0.050 ) , s 3.17 ( 0.028 ) , s 3.33 ( 0.143 ) , s 3.5 ( 0.302 ) , s 3.67 ( 0.237 ) , s 3.83 ( 0.088 ) , s 4 ( 0.040 ) }
c 5 { s 0 ( 0.003 ) , s 1 ( 0.010 ) , s 1.33 ( 0.001 ) , s 1.67 ( 0.009 ) , s 2 ( 0.042 ) , s 2.25 ( 0.002 ) , s 2.5 ( 0.046 ) , s 2.75 ( 0.126 ) , s 3 ( 0.244 ) , s 3.33 ( 0.033 ) , s 3.5 ( 0.168 ) , s 3.67 ( 0.214 ) , s 3.83 ( 0.074 ) , s 4 ( 0.029 ) }
c 6 { s 2 ( 0.002 ) , s 2.75 ( 0.018 ) , s 3 ( 0.052 ) , s 3.33 ( 0.015 ) , s 3.5 ( 0.288 ) , s 3.67 ( 0.289 ) , s 3.83 ( 0.248 ) , s 4 ( 0.089 ) }
c 7 { s 0 ( 0.002 ) , s 1 ( 0.003 ) , s 1.33 ( 0.001 ) , s 1.67 ( 0.003 ) , s 2 ( 0.022 ) , s 2.25 ( 0.003 ) , s 2.5 ( 0.032 ) , s 2.75 ( 0.145 ) , s 3 ( 0.065 ) , s 3.17 ( 0.014 ) , s 3.33 ( 0.079 ) , s 3.5 ( 0.388 ) , s 3.67 ( 0.126 ) , s 3.83 ( 0.085 ) , s 4 ( 0.031 ) }
c 8 { s 1 ( 0.001 ) , s 1.67 s 2 ( 0.003 ) , s 2.5 ( 0.003 ) , s 2.75 ( 0.012 ) , s 3 ( 0.020 ) , s 3.33 ( 0.042 ) , s 3.5 ( 0.353 ) , s 3.67 ( 0.308 ) , s 3.83 ( 0.181 ) , s 4 ( 0.077 ) }
Table 8. The comprehensive probabilistic linguistic decision matrix (alternative 2).
a 2
c 1 { s 1 ( 0.009 ) , s 1.33 ( 0.001 ) , s 2 ( 0.035 ) , s 2.25 ( 0.002 ) , s 2.5 ( 0.005 ) , s 2.75 ( 0.051 ) , s 3 ( 0.275 ) , s 3.17 ( 0.002 ) , s 3.33 ( 0.014 ) , s 3.5 ( 0.093 ) , s 3.67 ( 0.335 ) , s 3.83 ( 0.131 ) , s 4 ( 0.049 ) }
c 2 { s 1 ( 0.002 ) , s 2 ( 0.022 ) , s 2.75 ( 0.035 ) , s 3 ( 0.092 ) , s 3.33 ( 0.013 ) , s 3.5 ( 0.273 ) , s 3.67 ( 0.283 ) , s 3.83 ( 0.228 ) , s 4 ( 0.051 ) }
c 3 { s 0 ( 0.001 ) , s 1 ( 0.007 ) , s 2 ( 0.018 ) , s 2.5 ( 0.027 ) , s 2.75 ( 0.072 ) , s 3 ( 0.121 ) , s 3.17 ( 0.016 ) , s 3.33 ( 0.042 ) , s 3.5 ( 0.279 ) , s 3.67 ( 0.248 ) , s 3.83 ( 0.114 ) , s 4 ( 0.056 ) }
c 4 { s 0 ( 0.001 ) , s 1 ( 0.003 ) , s 1.67 ( 0.005 ) , s 2 ( 0.008 ) , s 2.25 ( 0.001 ) , s 2.5 ( 0.013 ) , s 2.75 ( 0.085 ) , s 3 ( 0.038 ) , s 3.17 ( 0.017 ) , s 3.33 ( 0.109 ) , s 3.5 ( 0.495 ) , s 3.67 ( 0.142 ) , s 3.83 ( 0.042 ) , s 4 ( 0.040 ) }
c 5 { s 0 ( 0.002 ) , s 1 ( 0.012 ) , s 1.67 ( 0.004 ) , s 2 ( 0.044 ) , s 2.5 ( 0.012 ) , s 2.75 ( 0.227 ) , s 3 ( 0.201 ) , s 3.17 ( 0.001 ) , s 3.33 ( 0.022 ) , s 3.5 ( 0.242 ) , s 3.67 ( 0.131 ) , s 3.83 ( 0.070 ) , s 4 ( 0.031 ) }
c 6 { s 0 ( 0.001 ) , s 1 ( 0.003 ) , s 2 ( 0.008 ) , s 2.75 ( 0.042 ) , s 3 ( 0.057 ) , s 3.33 ( 0.008 ) , s 3.5 ( 0.371 ) , s 3.67 ( 0.267 ) , s 3.83 ( 0.165 ) , s 4 ( 0.078 ) }
c 7 { s 0 ( 0.003 ) , s 1 ( 0.011 ) , s 1.33 ( 0.001 ) , s 1.67 ( 0.002 ) , s 2 ( 0.031 ) , s 2.25 ( 0.003 ) , s 2.5 ( 0.027 ) , s 2.75 ( 0.227 ) , s 3 ( 0.108 ) , s 3.17 ( 0.007 ) , s 3.33 ( 0.040 ) , s 3.5 ( 0.356 ) , s 3.67 ( 0.089 ) , s 3.83 ( 0.059 ) , s 4 ( 0.036 ) }
c 8 { s 0 ( 0.001 ) , s 1 ( 0.004 ) , s 1.67 ( 0.003 ) , s 2 ( 0.016 ) , s 2.5 ( 0.004 ) , s 2.75 ( 0.041 ) , s 3 ( 0.049 ) , s 3.17 ( 0.008 ) , s 3.33 ( 0.057 ) , s 3.5 ( 0.437 ) , s 3.67 ( 0.226 ) , s 3.83 ( 0.094 ) , s 4 ( 0.060 ) }
Table 9. The comprehensive probabilistic linguistic decision matrix (alternative 3).
a 3
c 1 { s 1 ( 0.005 ) , s 1.67 ( 0.002 ) , s 2 ( 0.022 ) , s 2.25 ( 0.003 ) , s 2.5 ( 0.005 ) , s 2.75 ( 0.022 ) , s 3 ( 0.250 ) , s 3.17 ( 0.002 ) , s 3.33 ( 0.031 ) , s 3.5 ( 0.083 ) , s 3.67 ( 0.384 ) , s 3.83 ( 0.133 ) , s 4 ( 0.057 ) }
c 2 { s 1 ( 0.003 ) , s 2 ( 0.018 ) , s 2.5 ( 0.004 ) , s 2.75 ( 0.050 ) , s 3 ( 0.136 ) , s 3.33 ( 0.007 ) , s 3.5 ( 0.196 ) , s 3.67 ( 0.306 ) , s 3.83 ( 0.231 ) , s 4 ( 0.049 ) }
c 3 { s 0 ( 0.002 ) , s 1 ( 0.005 ) , s 2 ( 0.036 ) , s 2.5 ( 0.012 ) , s 2.75 ( 0.026 ) , s 3 ( 0.147 ) , s 3.17 ( 0.012 ) , s 3.33 ( 0.058 ) , s 3.5 ( 0.133 ) , s 3.67 ( 0.347 ) , s 3.83 ( 0.155 ) , s 4 ( 0.065 ) }
c 4 { s 1 ( 0.005 ) , s 2 ( 0.023 ) , s 2.25 ( 0.002 ) , s 2.5 ( 0.015 ) , s 2.75 ( 0.051 ) , s 3 ( 0.086 ) , s 3.17 ( 0.019 ) , s 3.33 ( 0.130 ) , s 3.5 ( 0.292 ) , s 3.67 ( 0.246 ) , s 3.83 ( 0.083 ) , s 4 ( 0.047 ) }
c 5 { s 0 ( 0.011 ) , s 1 ( 0.031 ) , s 1.67 ( 0.008 ) , s 2 ( 0.074 ) , s 2.25 ( 0.003 ) , s 2.5 ( 0.033 ) , s 2.75 ( 0.199 ) , s 3 ( 0.217 ) , s 3.17 ( 0.002 ) , s 3.33 ( 0.034 ) , s 3.5 ( 0.192 ) , s 3.67 ( 0.118 ) , s 3.83 ( 0.045 ) , s 4 ( 0.032 ) }
c 6 { s 1 ( 0.001 ) , s 1.67 ( 0.002 ) , s 2 ( 0.019 ) , s 2.75 ( 0.025 ) , s 3 ( 0.058 ) , s 3.33 ( 0.022 ) , s 3.5 ( 0.253 ) , s 3.67 ( 0.245 ) , s 3.83 ( 0.271 ) , s 4 ( 0.103 ) }
c 7 { s 0 ( 0.002 ) , s 1 ( 0.015 ) , s 1.67 ( 0.012 ) , s 2 ( 0.069 ) , s 2.25 ( 0.004 ) , s 2.5 ( 0.021 ) , s 2.75 ( 0.124 ) , s 3 ( 0.076 ) , s 3.17 ( 0.014 ) , s 3.33 ( 0.077 ) , s 3.5 ( 0.357 ) , s 3.67 ( 0.121 ) , s 3.83 ( 0.058 ) , s 4 ( 0.050 ) }
c 8 { s 0 ( 0.007 ) , s 1 ( 0.007 ) , s 2 ( 0.024 ) , s 2.5 ( 0.004 ) , s 2.75 ( 0.056 ) , s 3 ( 0.054 ) , s 3.17 ( 0.001 ) , s 3.33 ( 0.055 ) , s 3.5 ( 0.396 ) , s 3.67 ( 0.217 ) , s 3.83 ( 0.099 ) , s 4 ( 0.081 ) }
Table 10. The comprehensive probabilistic linguistic decision matrix (alternative 4).
a 4
c 1 { s 1 ( 0.001 ) , s 2 ( 0.012 ) , s 2.5 ( 0.012 ) , s 2.75 ( 0.047 ) , s 3 ( 0.154 ) , s 3.17 ( 0.012 ) , s 3.33 ( 0.053 ) , s 3.5 ( 0.081 ) , s 3.67 ( 0.395 ) , s 3.83 ( 0.142 ) , s 4 ( 0.092 ) }
c 2 { s 2 ( 0.013 ) , s 2.5 ( 0.005 ) , s 2.75 ( 0.061 ) , s 3 ( 0.175 ) , s 3.33 ( 0.022 ) , s 3.5 ( 0.215 ) , s 3.67 ( 0.314 ) , s 3.83 ( 0.154 ) , s 4 ( 0.040 ) }
c 3 { s 1 ( 0.005 ) , s 1.67 ( 0.022 ) , s 2 ( 0.062 ) , s 2.25 ( 0.002 ) , s 2.5 ( 0.039 ) , s 2.75 ( 0.072 ) , s 3 ( 0.191 ) , s 3.17 ( 0.003 ) , s 3.33 ( 0.048 ) , s 3.5 ( 0.134 ) , s 3.67 ( 0.299 ) , s 3.83 ( 0.094 ) , s 4 ( 0.030 ) }
c 4 { s 0 ( 0.014 ) , s 1 ( 0.004 ) , s 1.67 ( 0.042 ) , s 2 ( 0.087 ) , s 2.25 ( 0.002 ) , s 2.5 ( 0.017 ) , s 2.75 ( 0.056 ) , s 3 ( 0.069 ) , s 3.17 ( 0.028 ) , s 3.33 ( 0.154 ) , s 3.5 ( 0.246 ) , s 3.67 ( 0.206 ) , s 3.83 ( 0.044 ) , s 4 ( 0.034 ) }
c 5 { s 1 ( 0.002 ) , s 2 ( 0.078 ) , s 2.5 ( 0.006 ) , s 2.75 ( 0.124 ) , s 3 ( 0.201 ) , s 3.33 ( 0.014 ) , s 3.5 ( 0.222 ) , s 3.67 ( 0.212 ) , s 3.83 ( 0.111 ) , s 4 ( 0.029 ) }
c 6 { s 2 ( 0.005 ) , s 2.75 ( 0.072 ) , s 3 ( 0.118 ) , s 3.33 ( 0.004 ) , s 3.5 ( 0.262 ) , s 3.67 ( 0.267 ) , s 3.83 ( 0.185 ) , s 4 ( 0.086 ) }
c 7 { s 1 ( 0.003 ) , s 1.67 ( 0.013 ) , s 2 ( 0.047 ) , s 2.25 ( 0.002 ) , s 2.5 ( 0.019 ) , s 2.75 ( 0.175 ) , s 3 ( 0.144 ) , s 3.17 ( 0.002 ) , s 3.33 ( 0.028 ) , s 3.5 ( 0.222 ) , s 3.67 ( 0.186 ) , s 3.83 ( 0.115 ) , s 4 ( 0.044 ) }
c 8 { s 0 ( 0.001 ) , s 1 ( 0.004 ) , s 1.67 ( 0.012 ) , s 2 ( 0.034 ) , s 2.5 ( 0.012 ) , s 2.75 ( 0.098 ) , s 3 ( 0.133 ) , s 3.17 ( 0.011 ) , s 3.33 ( 0.038 ) , s 3.5 ( 0.245 ) , s 3.67 ( 0.283 ) , s 3.83 ( 0.070 ) , s 4 ( 0.059 ) }
Table 11. The comprehensive probabilistic linguistic decision matrix (alternative 5).
a 5
c 1 { s 0 ( 0.002 ) , s 1 ( 0.005 ) , s 1.67 ( 0.005 ) , s 2 ( 0.026 ) , s 2.5 ( 0.006 ) , s 2.75 ( 0.010 ) , s 3 ( 0.194 ) , s 3.17 ( 0.015 ) , s 3.33 ( 0.043 ) , s 3.5 ( 0.077 ) , s 3.67 ( 0.420 ) , s 3.83 ( 0.139 ) , s 4 ( 0.058 ) }
c 2 { s 1 ( 0.002 ) , s 2 ( 0.030 ) , s 2.5 ( 0.002 ) , s 2.75 ( 0.025 ) , s 3 ( 0.194 ) , s 3.17 ( 0.015 ) , s 3.33 ( 0.043 ) , s 3.5 ( 0.077 ) , s 3.67 ( 0.420 ) , s 3.83 ( 0.139 ) , s 4 ( 0.058 ) }
c 3 { s 0 ( 0.004 ) , s 1 ( 0.014 ) , s 1.67 ( 0.004 ) , s 2 ( 0.047 ) , s 2.25 ( 0.002 ) , s 2.5 ( 0.012 ) , s 2.75 ( 0.050 ) , s 3 ( 0.227 ) , s 3.17 ( 0.006 ) , s 3.33 ( 0.046 ) , s 3.5 ( 0.129 ) , s 3.67 ( 0.314 ) , s 3.83 ( 0.096 ) , s 4 ( 0.049 ) }
c 4 { s 1 ( 0.008 ) , s 1.33 ( 0.002 ) , s 1.67 ( 0.004 ) , s 2 ( 0.020 ) , s 2.25 ( 0.009 ) , s 2.5 ( 0.045 ) , s 2.75 ( 0.096 ) , s 3 ( 0.088 ) , s 3.17 ( 0.028 ) , s 3.33 ( 0.126 ) , s 3.5 ( 0.274 ) , s 3.67 ( 0.211 ) , s 3.83 ( 0.045 ) , s 4 ( 0.045 ) }
c 5 { s 0 ( 0.002 ) , s 1 ( 0.019 ) , s 1.67 ( 0.006 ) , s 2 ( 0.083 ) , s 2.25 ( 0.004 ) , s 2.5 ( 0.032 ) , s 2.75 ( 0.109 ) , s 3 ( 0.239 ) , s 3.17 ( 0.008 ) , s 3.33 ( 0.032 ) , s 3.5 ( 0.196 ) , s 3.67 ( 0.170 ) , s 3.83 ( 0.081 ) , s 4 ( 0.021 ) }
c 6 { s 1 ( 0.004 ) , s 2 ( 0.025 ) , s 2.75 ( 0.092 ) , s 3 ( 0.154 ) , s 3.33 ( 0.004 ) , s 3.5 ( 0.261 ) , s 3.67 ( 0.232 ) , s 3.83 ( 0.160 ) , s 4 ( 0.069 ) }
c 7 { s 0 ( 0.006 ) , s 1 ( 0.037 ) , s 1.33 ( 0.002 ) , s 2 ( 0.058 ) , s 2.25 ( 0.004 ) , s 2.5 ( 0.020 ) , s 2.75 ( 0.097 ) , s 3 ( 0.173 ) , s 3.17 ( 0.009 ) , s 3.33 ( 0.077 ) , s 3.5 ( 0.234 ) , s 3.67 ( 0.154 ) , s 3.83 ( 0.088 ) , s 4 ( 0.042 ) }
c 8 { s 1 ( 0.004 ) , s 2 ( 0.015 ) , s 2.5 ( 0.007 ) , s 2.75 ( 0.044 ) , s 3 ( 0.107 ) , s 3.17 ( 0.004 ) , s 3.33 ( 0.047 ) , s 3.5 ( 0.251 ) , s 3.67 ( 0.319 ) , s 3.83 ( 0.129 ) , s 4 ( 0.073 ) }
Table 12. The comprehensive probabilistic linguistic decision matrix (alternative 6).
a 6
c 1 { s 2 ( 0.007 ) , s 2.5 ( 0.001 ) , s 2.75 ( 0.020 ) , s 3 ( 0.092 ) , s 3.17 ( 0.001 ) , s 3.33 ( 0.017 ) , s 3.5 ( 0.086 ) , s 3.67 ( 0.405 ) , s 3.83 ( 0.262 ) , s 4 ( 0.110 ) }
c 2 { s 2 ( 0.005 ) , s 2.75 ( 0.020 ) , s 3 ( 0.103 ) , s 3.33 ( 0.012 ) , s 3.5 ( 0.246 ) , s 3.67 ( 0.278 ) , s 3.83 ( 0.261 ) , s 4 ( 0.075 ) }
c 3 { s 1 ( 0.007 ) , s 2 ( 0.005 ) , s 2.25 ( 0.001 ) , s 2.5 ( 0.019 ) , s 2.75 ( 0.081 ) , s 3 ( 0.170 ) , s 3.17 ( 0.005 ) , s 3.33 ( 0.052 ) , s 3.5 ( 0.197 ) , s 3.67 ( 0.265 ) , s 3.83 ( 0.154 ) , s 4 ( 0.045 ) }
c 4 { s 0 ( 0.002 ) , s 1 ( 0.004 ) , s 2 ( 0.013 ) , s 2.25 ( 0.011 ) , s 2.5 ( 0.046 ) , s 2.75 ( 0.076 ) , s 3 ( 0.096 ) , s 3.17 ( 0.030 ) , s 3.33 ( 0.142 ) , s 3.5 ( 0.294 ) , s 3.67 ( 0.177 ) , s 3.83 ( 0.053 ) , s 4 ( 0.055 ) }
c 5 { s 2 ( 0.011 ) , s 2.5 ( 0.003 ) , s 2.75 ( 0.083 ) , s 3 ( 0.104 ) , s 3.33 ( 0.015 ) , s 3.5 ( 0.355 ) , s 3.67 ( 0.238 ) , s 3.83 ( 0.136 ) , s 4 ( 0.054 ) }
c 6 { s 3 ( 0.026 ) , s 3.17 ( 0.001 ) , s 3.33 ( 0.013 ) , s 3.5 ( 0.250 ) , s 3.67 ( 0.272 ) , s 3.83 ( 0.329 ) , s 4 ( 0.109 ) }
c 7 { s 2 ( 0.010 ) , s 2.25 ( 0.001 ) , s 2.5 ( 0.014 ) , s 2.75 ( 0.110 ) , s 3 ( 0.105 ) , s 3.17 ( 0.007 ) , s 3.33 ( 0.076 ) , s 3.5 ( 0.351 ) , s 3.67 ( 0.149 ) , s 3.83 ( 0.127 ) , s 4 ( 0.049 ) }
c 8 { s 0 ( 0.006 ) , s 1 ( 0.004 ) , s 2 ( 0.008 ) , s 2.25 ( 0.001 ) , s 2.5 ( 0.005 ) , s 2.75 ( 0.063 ) , s 3 ( 0.041 ) , s 3.33 ( 0.043 ) , s 3.5 ( 0.535 ) , s 3.67 ( 0.178 ) , s 3.83 ( 0.065 ) , s 4 ( 0.050 ) }
Table 13. The comprehensive probabilistic linguistic decision matrix (alternative 7).
a 7
c 1 { s 2 ( 0.008 ) , s 2.5 ( 0.002 ) , s 2.75 ( 0.016 ) , s 3 ( 0.098 ) , s 3.17 ( 0.002 ) , s 3.33 ( 0.008 ) , s 3.5 ( 0.119 ) , s 3.67 ( 0.261 ) , s 3.83 ( 0.297 ) , s 4 ( 0.190 ) }
c 2 { s 2 ( 0.008 ) , s 2.75 ( 0.057 ) , s 3 ( 0.076 ) , s 3.33 ( 0.002 ) , s 3.5 ( 0.247 ) , s 3.67 ( 0.291 ) , s 3.83 ( 0.273 ) , s 4 ( 0.045 ) }
c 3 { s 0 ( 0.002 ) , s 1 ( 0.002 ) , s 1.67 ( 0.002 ) , s 2 ( 0.041 ) , s 2.5 ( 0.008 ) , s 2.75 ( 0.081 ) , s 3 ( 0.180 ) , s 3.17 ( 0.004 ) , s 3.33 ( 0.046 ) , s 3.5 ( 0.099 ) , s 3.67 ( 0.374 ) , s 3.83 ( 0.125 ) , s 4 ( 0.036 ) }
c 4 { s 1 ( 0.004 ) , s 1.33 ( 0.025 ) , s 2 ( 0.025 ) , s 2.25 ( 0.008 ) , s 2.5 ( 0.042 ) , s 2.75 ( 0.143 ) , s 3 ( 0.119 ) , s 3.17 ( 0.010 ) , s 3.33 ( 0.051 ) , s 3.5 ( 0.349 ) , s 3.67 ( 0.140 ) , s 3.83 ( 0.046 ) , s 4 ( 0.036 ) }
c 5 { s 0 ( 0.002 ) , s 2 ( 0.010 ) , s 2.5 ( 0.008 ) , s 2.75 ( 0.125 ) , s 3 ( 0.084 ) , s 3.33 ( 0.032 ) , s 3.5 ( 0.214 ) , s 3.67 ( 0.258 ) , s 3.83 ( 0.155 ) , s 4 ( 0.112 ) }
c 6 { s 2 ( 0.006 ) , s 2.5 ( 0.006 ) , s 2.75 ( 0.075 ) , s 3 ( 0.099 ) , s 3.33 ( 0.012 ) , s 3.5 ( 0.382 ) , s 3.67 ( 0.100 ) , s 3.83 ( 0.182 ) , s 4 ( 0.139 ) }
c 7 { s 2 ( 0.008 ) , s 2.25 ( 0.002 ) , s 2.5 ( 0.021 ) , s 2.75 ( 0.160 ) , s 3 ( 0.136 ) , s 3.17 ( 0.012 ) , s 3.33 ( 0.026 ) , s 3.5 ( 0.373 ) , s 3.67 ( 0.091 ) , s 3.83 ( 0.157 ) , s 4 ( 0.014 ) }
c 8 { s 1 ( 0.002 ) , s 2 ( 0.031 ) , s 2.5 ( 0.002 ) , s 2.75 ( 0.026 ) , s 3 ( 0.033 ) , s 3.33 ( 0.036 ) , s 3.5 ( 0.541 ) , s 3.67 ( 0.174 ) , s 3.83 ( 0.092 ) , s 4 ( 0.064 ) }
Table 14. The comprehensive probabilistic linguistic decision matrix (alternative 8).
a 8
c 1 { s 1 ( 0.005 ) , s 1.67 ( 0.001 ) , s 2 ( 0.018 ) , s 2.25 ( 0.001 ) , s 2.5 ( 0.004 ) , s 2.75 ( 0.063 ) , s 3 ( 0.178 ) , s 3.17 ( 0.001 ) , s 3.33 ( 0.024 ) , s 3.5 ( 0.113 ) , s 3.67 ( 0.382 ) , s 3.83 ( 0.180 ) , s 4 ( 0.032 ) }
c 2 { s 1 ( 0.001 ) , s 2 ( 0.002 ) , s 2.75 ( 0.041 ) , s 3 ( 0.026 ) , s 3.33 ( 0.006 ) , s 3.5 ( 0.360 ) , s 3.67 ( 0.303 ) , s 3.83 ( 0.223 ) , s 4 ( 0.037 ) }
c 3 { s 1 ( 0.004 ) , s 1.33 ( 0.001 ) , s 1.67 ( 0.003 ) , s 2 ( 0.005 ) , s 2.25 ( 0.002 ) , s 2.5 ( 0.026 ) , s 2.75 ( 0.068 ) , s 3 ( 0.144 ) , s 3.17 ( 0.008 ) , s 3.33 ( 0.040 ) , s 3.5 ( 0.236 ) , s 3.67 ( 0.283 ) , s 3.83 ( 0.127 ) , s 4 ( 0.054 ) }
c 4 { s 0 ( 0.001 ) , s 1 ( 0.004 ) , s 1.67 ( 0.002 ) , s 2 ( 0.026 ) , s 2.25 ( 0.014 ) , s 2.5 ( 0.028 ) , s 2.75 ( 0.105 ) , s 3 ( 0.068 ) , s 3.17 ( 0.015 ) , s 3.33 ( 0.065 ) , s 3.5 ( 0.345 ) , s 3.67 ( 0.239 ) , s 3.83 ( 0.063 ) , s 4 ( 0.024 ) }
c 5 { s 1 ( 0.002 ) , s 2 ( 0.013 ) , s 2.5 ( 0.009 ) , s 2.75 ( 0.169 ) , s 3 ( 0.144 ) , s 3.17 ( 0.001 ) , s 3.33 ( 0.038 ) , s 3.5 ( 0.349 ) , s 3.67 ( 0.169 ) , s 3.83 ( 0.085 ) , s 4 ( 0.019 ) }
c 6 { s 1 ( 0.001 ) , s 2 ( 0.001 ) , s 2.75 ( 0.007 ) , s 3 ( 0.026 ) , s 3.33 ( 0.021 ) , s 3.5 ( 0.329 ) , s 3.67 ( 0.295 ) , s 3.83 ( 0.229 ) , s 4 ( 0.092 ) }
c 7 { s 0 ( 0.001 ) , s 1 ( 0.003 ) , s 1.67 ( 0.003 ) , s 2 ( 0.010 ) , s 2.25 ( 0.002 ) , s 2.5 ( 0.011 ) , s 2.75 ( 0.107 ) , s 3 ( 0.079 ) , s 3.17 ( 0.002 ) , s 3.33 ( 0.064 ) , s 3.5 ( 0.521 ) , s 3.67 ( 0.103 ) , s 3.83 ( 0.061 ) , s 4 ( 0.034 ) }
c 8 { s 0 ( 0.001 ) , s 1 ( 0.002 ) , s 2 ( 0.007 ) , s 2.5 ( 0.004 ) , s 2.75 ( 0.078 ) , s 3 ( 0.041 ) , s 3.17 ( 0.001 ) , s 3.33 ( 0.035 ) , s 3.5 ( 0.508 ) , s 3.67 ( 0.211 ) , s 3.83 ( 0.080 ) , s 4 ( 0.032 ) }
Table 15. The comprehensive probabilistic linguistic decision matrix (alternative 9).
a 9
c 1 { s 2.25 ( 0.004 ) , s 2.5 ( 0.005 ) , s 2.75 ( 0.001 ) , s 3 ( 0.137 ) , s 3.17 ( 0.001 ) , s 3.33 ( 0.003 ) , s 3.5 ( 0.134 ) , s 3.67 ( 0.438 ) , s 3.83 ( 0.234 ) , s 4 ( 0.043 ) }
c 2 { s 2 ( 0.007 ) , s 2.5 ( 0.005 ) , s 2.75 ( 0.073 ) , s 3 ( 0.177 ) , s 3.33 ( 0.004 ) , s 3.5 ( 0.270 ) , s 3.67 ( 0.261 ) , s 3.83 ( 0.175 ) , s 4 ( 0.028 ) }
c 3 { s 1 ( 0.042 ) , s 2 ( 0.008 ) , s 2.25 ( 0.002 ) , s 2.5 ( 0.013 ) , s 2.75 ( 0.030 ) , s 3 ( 0.151 ) , s 3.17 ( 0.002 ) , s 3.33 ( 0.054 ) , s 3.5 ( 0.271 ) , s 3.67 ( 0.251 ) , s 3.83 ( 0.143 ) , s 4 ( 0.030 ) }
c 4 { s 0 ( 0.001 ) , s 1 ( 0.001 ) , s 1.67 ( 0.001 ) , s 2 ( 0.009 ) , s 2.25 ( 0.005 ) , s 2.5 ( 0.016 ) , s 2.75 ( 0.074 ) , s 3 ( 0.093 ) , s 3.17 ( 0.008 ) , s 3.33 ( 0.049 ) , s 3.5 ( 0.582 ) , s 3.67 ( 0.099 ) , s 3.83 ( 0.020 ) , s 4 ( 0.043 ) }
c 5 { s 1 ( 0.001 ) , s 2 ( 0.020 ) , s 2.25 ( 0.001 ) , s 2.5 ( 0.008 ) , s 2.75 ( 0.209 ) , s 3 ( 0.236 ) , s 3.17 ( 0.001 ) , s 3.33 ( 0.016 ) , s 3.5 ( 0.244 ) , s 3.67 ( 0.161 ) , s 3.83 ( 0.078 ) , s 4 ( 0.024 ) }
c 6 { s 1 ( 0.001 ) , s 2 ( 0.005 ) , s 2.5 ( 0.001 ) , s 2.75 ( 0.072 ) , s 3 ( 0.062 ) , s 3.33 ( 0.007 ) , s 3.5 ( 0.357 ) , s 3.67 ( 0.177 ) , s 3.83 ( 0.219 ) , s 4 ( 0.099 ) }
c 7 { s 1 ( 0.001 ) , s 1.67 ( 0.001 ) , s 2 ( 0.071 ) , s 2.5 ( 0.024 ) , s 2.75 ( 0.272 ) , s 3 ( 0.216 ) , s 3.33 ( 0.010 ) , s 3.5 ( 0.272 ) , s 3.67 ( 0.083 ) , s 3.83 ( 0.038 ) , s 4 ( 0.011 ) }
c 8 { s 1 ( 0.001 ) , s 2 ( 0.007 ) , s 2.5 ( 0.005 ) , s 2.75 ( 0.147 ) , s 3 ( 0.087 ) , s 3.17 ( 0.001 ) , s 3.33 ( 0.016 ) , s 3.5 ( 0.499 ) , s 3.67 ( 0.147 ) , s 3.83 ( 0.053 ) , s 4 ( 0.036 ) }
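Each cell in Tables 7 to 15 is a probabilistic linguistic term set (PLTS) of the form {s_k(p_k)}. A standard way to compare such cells is the expected score: the probability-weighted mean of the linguistic subscripts, divided by the total probability so that incomplete distributions are handled. The sketch below scores the (a1, c6) cell from Table 7; this is the conventional PLTS score function, which may differ in detail from the operator defined in the paper's methodology.

```python
# Expected score of a PLTS {s_k(p_k)}: sum(k * p_k) / sum(p_k).
# Dividing by the total probability handles incomplete distributions.
def plts_score(plts):
    total_p = sum(p for _, p in plts)
    return sum(k * p for k, p in plts) / total_p

# Cell (a1, c6) from Table 7, as (subscript, probability) pairs.
a1_c6 = [(2, 0.002), (2.75, 0.018), (3, 0.052), (3.33, 0.015),
         (3.5, 0.288), (3.67, 0.289), (3.83, 0.248), (4, 0.089)]

score = plts_score(a1_c6)  # roughly 3.63 on the s_0..s_4 scale
```

Ranking alternatives by this score per attribute reproduces the intuition that most mass sitting near s_3.5 to s_3.83 yields a high cell evaluation.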
Table 16. The richness and dissimilarity of each attribute.
(attributes c1 to c8, left to right)
CNEV R_i^sr 0.129 0.110 0.131 0.111 0.187 0.086 0.156 0.091
CNEV R_i^tr 0.078 0.113 0.128 0.142 0.123 0.156 0.128 0.132
CNEV D_i^sr 0.112 0.098 0.100 0.100 0.216 0.121 0.132 0.122
CNEV D_i^tr 0.181 0.130 0.117 0.116 0.099 0.128 0.130 0.100
MLNEV R_i^sr 0.108 0.087 0.147 0.170 0.161 0.062 0.152 0.112
MLNEV R_i^tr 0.123 0.133 0.143 0.135 0.128 0.163 0.112 0.063
MLNEV D_i^sr 0.111 0.111 0.107 0.156 0.131 0.135 0.143 0.107
MLNEV D_i^tr 0.180 0.124 0.121 0.108 0.102 0.130 0.109 0.127
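One plausible way to turn the four indicator rows of Table 16 into per-attribute weights is to average them attribute-wise and normalize. The exact aggregation operator used by the authors is not shown in this excerpt, so the equal-weight average below is only an illustrative assumption.

```python
# CNEV rows of Table 16 (attributes c1..c8, left to right).
R_sr = [0.129, 0.110, 0.131, 0.111, 0.187, 0.086, 0.156, 0.091]
R_tr = [0.078, 0.113, 0.128, 0.142, 0.123, 0.156, 0.128, 0.132]
D_sr = [0.112, 0.098, 0.100, 0.100, 0.216, 0.121, 0.132, 0.122]
D_tr = [0.181, 0.130, 0.117, 0.116, 0.099, 0.128, 0.130, 0.100]

# Illustrative assumption: equal-weight average of the four indicators,
# followed by normalization so the weights sum to one.
avg = [(a + b + c + d) / 4 for a, b, c, d in zip(R_sr, R_tr, D_sr, D_tr)]
weights = [v / sum(avg) for v in avg]
```

The resulting vector is close to, but not identical with, the published CNEV weights in Table 17, which suggests the paper combines richness and dissimilarity with a more refined operator than a plain mean.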
Table 17. The attribute weights for compact and medium–large new energy vehicles.
w1 w2 w3 w4 w5 w6 w7 w8
CNEV 0.126 0.114 0.120 0.119 0.147 0.127 0.135 0.112
MLNEV 0.135 0.117 0.130 0.138 0.127 0.127 0.125 0.101
Table 18. Prioritizations between the attributes.
Vehicle Type | Prioritization between Attributes
CNEV | c5 ≻ c7 ≻ c6 ≻ c1 ≻ c3 ≻ c4 ≻ c2 ≻ c8
MLNEV | c4 ≻ c1 ≻ c3 ≻ c5 ≻ c6 ≻ c7 ≻ c2 ≻ c8
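The prioritizations in Table 18 coincide with ordering the attributes by their Table 17 weights in descending order (for MLNEV, c5 and c6 tie at 0.127 and keep their index order), which the following check reproduces:

```python
# Attribute weights from Table 17.
cnev_w = {"c1": 0.126, "c2": 0.114, "c3": 0.120, "c4": 0.119,
          "c5": 0.147, "c6": 0.127, "c7": 0.135, "c8": 0.112}
mlnev_w = {"c1": 0.135, "c2": 0.117, "c3": 0.130, "c4": 0.138,
           "c5": 0.127, "c6": 0.127, "c7": 0.125, "c8": 0.101}

def prioritize(w):
    # sorted() is stable, so tied attributes keep their c1..c8 order
    return sorted(w, key=w.get, reverse=True)

prioritize(cnev_w)   # ['c5', 'c7', 'c6', 'c1', 'c3', 'c4', 'c2', 'c8']
prioritize(mlnev_w)  # ['c4', 'c1', 'c3', 'c5', 'c6', 'c7', 'c2', 'c8']
```

Both orders match Table 18 exactly, so the prioritization can be read as a direct consequence of the weight vector.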
Table 19. The ranking results.
Alternative: a1, a2, a3, a4, a5 (CNEV); a6, a7, a8, a9 (MLNEV)
Nearness degree: 0.861, 0.538, 0.476, 0.452, 0.386 (CNEV); 0.874, 0.624, 0.592, 0.332 (MLNEV)
TOPSIS method: a1 ≻ a2 ≻ a3 ≻ a4 ≻ a5 (CNEV); a6 ≻ a7 ≻ a8 ≻ a9 (MLNEV)
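The nearness degree row is the TOPSIS closeness coefficient d-/(d+ + d-), where d+ and d- are an alternative's distances to the positive and negative ideal solutions. The sketch below shows the mechanics on a hypothetical 3-alternative, 2-attribute benefit matrix; it is the crisp TOPSIS baseline, not the paper's probabilistic linguistic variant.

```python
import math

# Hypothetical score matrix (3 alternatives x 2 benefit attributes) and weights.
X = [[3.6, 3.4],
     [3.3, 3.5],
     [3.1, 3.2]]
w = [0.6, 0.4]

# Weight the columns, then take the positive/negative ideal per attribute.
V = [[w[j] * row[j] for j in range(len(w))] for row in X]
pis = [max(col) for col in zip(*V)]  # positive ideal solution
nis = [min(col) for col in zip(*V)]  # negative ideal solution

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Closeness (nearness) coefficient: larger means nearer the positive ideal.
cc = [dist(v, nis) / (dist(v, pis) + dist(v, nis)) for v in V]
ranking = sorted(range(len(X)), key=lambda i: cc[i], reverse=True)
```

Sorting by `cc` in descending order gives the final ranking, exactly as the nearness degrees above order a1 through a5 and a6 through a9.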
Table 20. The ranking results for different consumer demands.
Family-oriented consumers: a2 ≻ a5 ≻ a1 ≻ a3 ≻ a4 (CNEV); a8 ≻ a6 ≻ a7 ≻ a9 (MLNEV)
Appearance-oriented consumers: a1 ≻ a2 ≻ a4 ≻ a3 ≻ a5 (CNEV); a6 ≻ a7 ≻ a9 ≻ a8 (MLNEV)
Professional-oriented consumers: a1 ≻ a4 ≻ a3 ≻ a5 ≻ a2 (CNEV); a6 ≻ a7 ≻ a8 ≻ a9 (MLNEV)
Experiential-oriented consumers: - (CNEV); a6 ≻ a7 ≻ a8 ≻ a9 (MLNEV)
Table 21. The results based on score function method.
Alternative: a1, a2, a3, a4, a5 (CNEV); a6, a7, a8, a9 (MLNEV)
Weighted score value: 3.467, 3.379, 3.373, 3.347, 3.339 (CNEV); 3.502, 3.456, 3.453, 3.384 (MLNEV)
Score function [21]: a1 ≻ a2 ≻ a3 ≻ a4 ≻ a5 (CNEV); a6 ≻ a7 ≻ a8 ≻ a9 (MLNEV)
Our proposed method: a1 ≻ a2 ≻ a3 ≻ a4 ≻ a5 (CNEV); a6 ≻ a7 ≻ a8 ≻ a9 (MLNEV)
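The weighted score values in Table 21 follow the usual linear form score(a) = sum_j w_j * s_j(a). The snippet below applies the CNEV weights from Table 17 to a hypothetical vector of per-attribute scores; the actual attribute-level scores behind Table 21 are not tabulated in this excerpt.

```python
# CNEV attribute weights from Table 17 (c1..c8).
weights = [0.126, 0.114, 0.120, 0.119, 0.147, 0.127, 0.135, 0.112]
# Hypothetical per-attribute scores for one alternative on the s_0..s_4 scale.
scores = [3.55, 3.48, 3.50, 3.42, 3.30, 3.60, 3.38, 3.55]

weighted_score = sum(w * s for w, s in zip(weights, scores))
```

Because the weights sum to one, the result stays on the same s_0..s_4 scale as the inputs, which is why all values in Table 21 fall between 3.3 and 3.6.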
Table 22. The attribute weights based on the TF-IDF values.
(attributes c1 to c8, left to right)
CNEV TF-IDF 0.153 0.088 0.067 0.029 0.023 0.061 0.047 0.024
CNEV Weight 0.311 0.179 0.136 0.059 0.047 0.125 0.096 0.048
MLNEV TF-IDF 0.136 0.070 0.075 0.020 0.030 0.056 0.042 0.014
MLNEV Weight 0.307 0.158 0.170 0.046 0.067 0.126 0.095 0.031
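The Weight rows in Table 22 are, up to rounding, the TF-IDF rows normalized to sum to one (e.g. 0.153 / 0.492 ≈ 0.311 for c1 under CNEV). A minimal check:

```python
# CNEV TF-IDF values for c1..c8 from Table 22.
tfidf = [0.153, 0.088, 0.067, 0.029, 0.023, 0.061, 0.047, 0.024]

# Normalize so the weights sum to one; small third-decimal differences
# from the published row come from rounding.
weights = [round(v / sum(tfidf), 3) for v in tfidf]  # weights[0] == 0.311
```

The recomputed vector matches the published CNEV Weight row to within one unit in the third decimal.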
Table 23. Ranking results based on different methods.
No. | Method | Ranking Results (CNEV) | Ranking Results (MLNEV) | Best Alternative
1 | TR-TOPSIS [36] | a5 ≻ a4 ≻ a3 ≻ a1 ≻ a2 | a6 ≻ a9 ≻ a8 ≻ a7 | a5, a6
2 | TR-DSET [71] | a4 ≻ a5 ≻ a2 ≻ a1 ≻ a3 | a9 ≻ a8 ≻ a6 ≻ a7 | a4, a9
3 | TR-TODIM [32] | a5 ≻ a1 ≻ a3 ≻ a4 ≻ a2 | a6 ≻ a7 ≻ a8 ≻ a9 | a5, a6
4 | SR-TOPSIS [36] | a1 ≻ a2 ≻ a3 ≻ a4 ≻ a5 | a6 ≻ a8 ≻ a7 ≻ a9 | a1, a6
5 | SR-DSET [71] | a1 ≻ a2 ≻ a3 ≻ a5 ≻ a4 | a6 ≻ a8 ≻ a7 ≻ a9 | a1, a6
6 | Our proposed method | a1 ≻ a2 ≻ a3 ≻ a4 ≻ a5 | a6 ≻ a7 ≻ a8 ≻ a9 | a1, a6
Yu, S.; Zhang, X.; Du, Z.; Chen, Y. A New Multi-Attribute Decision Making Method for Overvalued Star Ratings Adjustment and Its Application in New Energy Vehicle Selection. Mathematics 2023, 11, 2037. https://doi.org/10.3390/math11092037