Article

An Approach Based on Granular Computing and 2-Tuple Linguistic Model to Personalize Linguistic Information in Group Decision-Making

by Aylin Estrada-Velazco 1,2, Yeleny Zulueta-Véliz 2, José Ramón Trillo 3 and Francisco Javier Cabrerizo 1,*

1 Andalusian Research Institute in Data Science and Computational Intelligence, Department of Computer Science and Artificial Intelligence, University of Granada, 18071 Granada, Spain
2 School of Free Technologies, University of Informatics Sciences, Havana 19370, Cuba
3 Department of Computer Science and Systems Engineering, University of Zaragoza, 50018 Zaragoza, Spain
* Author to whom correspondence should be addressed.
Electronics 2025, 14(23), 4698; https://doi.org/10.3390/electronics14234698
Submission received: 27 October 2025 / Revised: 25 November 2025 / Accepted: 26 November 2025 / Published: 28 November 2025
(This article belongs to the Special Issue Artificial Intelligence-Driven Emerging Applications)

Abstract

Group decision-making is an inherently collaborative process that can become increasingly complex when addressing the uncertainty associated with linguistic assessments from experts. A crucial principle for achieving a solution of superior quality lies in the acknowledgment that the same word may bear divergent meanings among different experts. Regrettably, a significant number of existing methodologies for computing with words presuppose a uniformity of meaning for linguistic assessments across all participating individuals. In response to this limitation, we propose an innovative methodology based on the 2-tuple linguistic model in conjunction with the granular computing paradigm. Given that the individual interpretations of words, when articulating preferences, are closely linked to the consistency of each expert, our proposal places particular emphasis on the modification of the symbolic translation of the 2-tuple linguistic value with the overarching objective of maximizing the consistency of their assessments. This adjustment is implemented while preserving the original linguistic preferences communicated by the experts. We address a real-world building refurbishment problem and conduct a comparative analysis to demonstrate the effectiveness of the proposal. Focusing on consistency enhances group decision-making processes and outcomes, ensuring both accuracy and alignment with individual interpretations and preferences.

1. Introduction

The decision-making process is inherently complex and, when it involves multiple experts providing assessments of various alternatives to address a problem in order to arrive at a solution (or a series of solutions), it is referred to as group decision-making (GDM) [1]. This process requires the aggregation of opinions within a dynamic environment influenced by the behaviors of experts, who often possess differing levels of knowledge regarding the alternatives being evaluated. Furthermore, these experts may approach the analysis from distinct perspectives, shaped by their individual experiences or expectations. Consequently, it is crucial to effectively manage the conflicts between differing opinions to ensure that all experts feel adequately represented in the ultimate decision [2]. In this context, two fundamental challenges arise: the modeling of the judgments rendered by the experts and the evaluation of the quality of those judgments [3].
To model evaluations in decision-making, various frameworks have been proposed [4]. One such framework utilizes an ordered vector of alternatives, wherein the degree to which each alternative addresses a problem is assessed through a utility value. Alternatively, pairwise comparison may be employed to determine the preference degree of one alternative in relation to another. When applied iteratively, this method generates a preference relation, which is the most prevalent framework in decision-making due to its ability to facilitate accurate evaluations and streamline the aggregation of individual assessments into group evaluations—an aspect that is particularly critical in GDM scenarios [5]. Nevertheless, this framework’s propensity to yield more information than necessary can lead to inconsistent evaluations. Such inconsistencies may stem from insufficient current knowledge and the intrinsic complexities associated with the GDM process. Because consistency has been recognized as a measure of rationality that enables performance evaluations, the quality of a preference relation can be gauged by its degree of consistency. Consequently, to ensure the validity of the outcomes, it is essential to maintain consistent preference relations [6].
Within the realm of preference relations, one can identify fuzzy preference relations and multiplicative preference relations, both of which rely on numerical values [7,8]. Nevertheless, as decision-making is a fundamentally cognitive process, methodologies based on computing with words effectively bridge the gap between computational techniques and human reasoning [9,10]. These approaches have demonstrated considerable promise in enriching and formulating decision-making models that effectively manage qualitative information [11]; for example, models of computing with words utilizing qualitative scales have been extensively applied to the management of linguistic information [3]. Recently, novel models based on the granular computing framework have emerged to effectively address linguistic expressions [12,13,14,15,16]. Regardless of the specific model of computing with words, it is evident that linguistic information must be operationalized, as it does not possess inherent operational capability. In the context of models grounded in the granular computing framework, this operationalization is achieved through a process known as information granulation [17]. In this framework, both the distribution and semantics of linguistic values are not predetermined; rather, they are established through an optimization process and are thus more flexible. This process entails the optimization of a suitable criterion through the effective mapping of linguistic values to a designated information granule family. Specifically, the distribution and semantics of linguistic values are typically modeled through a series of continuous intervals (information granules) that are established by assigning a vector of cutoff points within [0, 1]. The process of allocating these cutoff points is usually determined as an optimization problem, where consistency and consensus are commonly employed as key performance indicators [12,13,14,15,16,18,19].
The common characteristics of proposals based on the granular computing paradigm include the formulation of operational versions of linguistic terms in a manner that maintains consistent intervals across all experts—essentially assuming the same semantics for everyone involved. However, it is imperative to acknowledge that, in GDM contexts, words may possess distinct meanings for different individuals [20,21]. This consideration holds particular significance in environments such as social networks, where a diverse array of users interact and the same term can be interpreted in multiple ways [22]. Traditional methodologies for computing with words frequently neglect this critical issue, either by avoiding direct engagement with it or by employing multi-granular linguistic models and type-2 fuzzy sets as remedial measures [23,24]. While these approaches may alleviate some of the ambiguity associated with word meanings, they do not sufficiently capture the unique semantics pertinent to each individual participant. To overcome this limitation, personalized individual semantics approaches were introduced, which utilize the 2-tuple linguistic model [25] alongside numerical scales [26,27] to significantly enhance the specificity and relevance of interpretations within GDM frameworks [28,29,30].
This study addresses a major gap in existing granular-computing-based linguistic decision-making models; namely, their assumption of uniform semantics across experts. Such an assumption disregards the fact that linguistic terms often carry different meanings for different individuals, which limits the accuracy and fidelity of collective decisions. To overcome this limitation, we propose a novel approach that personalizes linguistic semantics by integrating granular computing with the 2-tuple linguistic model. The central contribution of this work lies in enabling expert-specific interpretations of linguistic terms within a formal computational structure, thus enhancing both the coherence and representativeness of group decisions.
Therefore, the main research goal of this study is to develop a novel group decision-making model that personalizes the semantics of linguistic terms by integrating the granular computing paradigm with the 2-tuple linguistic model. Specifically, our objective is to operationalize individual linguistic preferences through personalized information granules, generated by optimizing the consistency of each expert’s 2-tuple linguistic preference relation. In this way, we aim to overcome the limitations of existing granular computing approaches that assume uniform linguistic semantics among experts, ultimately providing a more accurate, interpretable, and expert-aligned framework for linguistic GDM.
The proposed method is evaluated through a dual strategy that combines quantitative and comparative analysis. First, we measure the improvement in experts’ consistency levels after the optimization of symbolic translations, thereby providing a numerical indicator of the method’s effectiveness in enhancing the rationality and coherence of linguistic preference relations. Second, we conduct a comparative study against representative granular-computing-based linguistic decision-making approaches, demonstrating that our model achieves superior consistency and produces collective outcomes that more faithfully reflect individual semantic interpretations. Together, these two complementary perspectives confirm the robustness and practical relevance of the proposed approach.
The rest of this manuscript is structured as follows. Section 2 provides essential foundational knowledge necessary for comprehending our proposal, including the description of the 2-tuple linguistic model, the definition of a GDM problem in 2-tuple linguistic contexts, a methodology for determining individual consistency within a 2-tuple linguistic preference relation, and an exposition of the Particle Swarm Optimization (PSO) algorithm. Section 3 elaborates on our innovative GDM approach, which employs granular computing and the 2-tuple linguistic model to personalize linguistic information. Section 4 presents a real-world case study that exemplifies the proposed GDM model, accompanied by a comparative analysis to emphasize its key features in Section 5. Lastly, Section 6 concludes with a summary of our findings and recommends potential directions for future research.

2. Preliminaries

The fundamental concepts and methods underlying our proposal are introduced in this section.

2.1. The 2-Tuple Linguistic Model

The 2-tuple linguistic model, proposed by F. Herrera and L. Martínez, is one of the most widely used symbolic models for computing with words, designed to minimize information loss during computations involving linguistic values [25,31]. It enhances accuracy and streamlines the processes associated with computing with natural language. This model treats the linguistic domain as a continuous spectrum while preserving the foundational elements of linguistics, including syntax and semantics. To do so, it extends the use of indices and modifies the fuzzy linguistic approach by introducing a new parameter referred to as symbolic translation. Consequently, linguistic information is represented through a linguistic 2-tuple, a pair of values (s_i, α) ∈ S̄ = S × [−0.5, 0.5), where s_i ∈ S = {s_0, s_1, …, s_g} is a linguistic value and α ∈ [−0.5, 0.5) is a numerical value representing the symbolic translation, i.e., the difference between a counting of information β, evaluated in the granularity interval [0, g] of S, and the closest value in {0, 1, …, g}, which indicates the index of the closest linguistic value in S.
The 2-tuple linguistic model introduces two transformation functions, Δ and Δ⁻¹, to facilitate computing with words without information loss. These functions allow conversions between numerical values and linguistic 2-tuples, and vice versa [25].
Definition 1.
The function Δ transforms a numerical value β ∈ [0, g] into its equivalent linguistic 2-tuple (s_i, α) ∈ S × [−0.5, 0.5). It is defined as:

$$\Delta(\beta) = (s_i, \alpha), \quad i = \mathrm{round}(\beta), \quad \alpha = \beta - i, \quad \alpha \in [-0.5, 0.5)$$

where round is the usual rounding operator, s_i is the linguistic label whose index is closest to β, and α is the value of the symbolic translation. If α = 0, then β coincides exactly with the index of a linguistic term.
Definition 2.
The function Δ⁻¹ performs the inverse transformation, converting a linguistic 2-tuple (s_i, α) into its equivalent numerical value β ∈ [0, g]. It is defined as:

$$\Delta^{-1}(s_i, \alpha) = i + \alpha = \beta$$
The conversion of a linguistic value into a linguistic 2-tuple consists of adding the value 0 as a symbolic translation: s_i ∈ S ↦ (s_i, 0) ∈ S̄. Additionally, comparison operators, negation operators, and aggregation operators for linguistic 2-tuples have been defined [25].
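As an illustration, the two transformation functions admit a minimal Python sketch (function names are our own; we round half up via floor(β + 0.5) so that α always falls in [−0.5, 0.5), resolving the tie β = i + 0.5 upward, which Python's built-in banker's rounding would not guarantee):

```python
import math

def delta(beta):
    """Δ (Definition 1): map β ∈ [0, g] to the 2-tuple (i, α),
    where i indexes the closest label s_i and α ∈ [-0.5, 0.5)."""
    i = math.floor(beta + 0.5)  # round half up, so alpha stays in [-0.5, 0.5)
    alpha = beta - i            # symbolic translation
    return i, alpha

def delta_inv(i, alpha):
    """Δ⁻¹ (Definition 2): map the 2-tuple (s_i, α) back to β = i + α."""
    return i + alpha

# With g = 6 (seven labels s_0, ..., s_6): β = 3.7 lies between s_3 and s_4
# and is represented as (s_4, -0.3); Δ⁻¹ then recovers β.
i, alpha = delta(3.7)       # i == 4, alpha ≈ -0.3
beta = delta_inv(i, alpha)  # beta ≈ 3.7
```

Because Δ and Δ⁻¹ are mutually inverse on S̄, computations can be carried out on β values and converted back without information loss, which is the central property of the model.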

2.2. Group Decision-Making in 2-Tuple Linguistic Contexts

Let e_h ∈ E and x_i ∈ X denote an expert and an alternative, respectively, with h = 1, 2, …, m and i = 1, 2, …, n. In a 2-tuple linguistic context, the objective is to assign each alternative a linguistic 2-tuple representing the collective opinion of the experts [25,32,33]. In this study, we focus on 2-tuple linguistic preference relations as the way to represent experts’ judgments [34,35].
Definition 3
([32]). “A 2-tuple linguistic preference relation P̄ on a collection of alternatives X is a collection of 2-tuples on the product set X × X that is characterized by a membership function μ_P̄ : X × X → S × [−0.5, 0.5).”
In practice, an expert associates each pair of alternatives, x_i and x_j, with a value that reflects a certain degree of preference for alternative x_i over x_j using a linguistic 2-tuple p̄_ij, obtaining a matrix P̄ = [p̄_ij]_{n×n}, where p̄_ij = μ_P̄(x_i, x_j) ∀ i, j ∈ {1, 2, …, n} and p̄_ij ∈ S × [−0.5, 0.5). Once all the experts have expressed their preferences, this information is then used to construct a collective 2-tuple linguistic preference relation that incorporates the evaluations of the pairwise comparisons made by all the experts.
In consequence, to solve a GDM problem, a selection process consisting of two phases is usually carried out [32]:
  • Phase of aggregation. In this phase, the individual preferences expressed by experts, conveyed through 2-tuple linguistic preference relations, are systematically combined to form a collective preference that reflects the group’s overall stance. This process utilizes various aggregation operators, such as weighted averages or priority ranking techniques, to efficiently synthesize the diverse preferences into a cohesive whole. By addressing the nuances of each expert’s input, this phase aims to ensure that the resulting collective preference accurately embodies the group’s values and priorities.
  • Phase of exploitation. Following aggregation, the collective 2-tuple linguistic preference relation serves as a foundation for evaluating the available alternatives. In this phase, a systematic approach is employed to rank these alternatives based on their alignment with the collective 2-tuple linguistic preference relation. Typically, a selection function is applied, which assigns a score or degree of selection to each alternative. This scoring reflects how well each alternative satisfies the group’s preferences, ultimately guiding the decision-making process towards identifying the most appropriate choice or choices that best meet the group’s objectives.
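The aggregation phase can be sketched in Python (hypothetical data; the arithmetic mean of 2-tuples, computed as Δ of the mean of the Δ⁻¹ values, is used here as one representative aggregation operator among those cited):

```python
import math

def delta(beta):
    """Δ: β ↦ (i, α), with α ∈ [-0.5, 0.5) (round-half-up variant)."""
    i = math.floor(beta + 0.5)
    return i, beta - i

def delta_inv(i, alpha):
    """Δ⁻¹: (s_i, α) ↦ β = i + α."""
    return i + alpha

def aggregate_mean(judgements):
    """Collective 2-tuple as Δ of the arithmetic mean of the Δ⁻¹ values;
    weighted variants are equally applicable."""
    beta = sum(delta_inv(i, a) for i, a in judgements) / len(judgements)
    return delta(beta)

# Three experts' 2-tuples for the same pair (x_i, x_j) on a scale with g = 6;
# the judgements are made up for illustration.
collective = aggregate_mean([(4, 0.0), (3, 0.2), (5, -0.4)])  # ≈ (s_4, -0.07)
```

Applying the same operator entry-wise over the experts' matrices yields the collective 2-tuple linguistic preference relation used in the exploitation phase.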

2.3. Consistency

In the GDM process, it is vital to effectively manage consistency to enhance the overall quality and acceptance of the final solution. Experts frequently offer linguistic assessments that, when translated into preference relations, may exhibit inconsistencies. This inconsistency can lead to a situation where the information provided by an expert is not only incoherent but may also contain conflicting viewpoints [6]. Such contradictions can undermine the credibility of the assessments and hinder the decision-making process. Therefore, establishing a framework for assessing and addressing these inconsistencies is essential for ensuring that the collective decision is both coherent and broadly supported by the group members.
The concept of consistency in expert evaluations involves two distinct but interconnected aspects [36]: (i) the consistency of individual experts and (ii) the consistency of collective judgments within a group of experts. This study concentrates specifically on the first aspect, defining an expert as consistent if their pairwise comparisons fulfill the requirements of the additive transitivity property [36]. A mathematical formulation of additive transitivity for fuzzy preference relations can be found in Reference [37]:
$$(r_{ij} - 0.5) + (r_{jk} - 0.5) = (r_{ik} - 0.5), \quad \forall i, j, k \in \{1, \ldots, n\},\ r_{ij}, r_{jk}, r_{ik} \in [0, 1]$$
To redefine the linguistic additive transitivity property for 2-tuple linguistic preference relations, the transformation functions Δ and Δ⁻¹ are employed [32]:

$$\Delta\big[(\Delta^{-1}(\bar{p}_{ij}) - \Delta^{-1}(s_{g/2}, 0)) + (\Delta^{-1}(\bar{p}_{jk}) - \Delta^{-1}(s_{g/2}, 0))\big] = \Delta\big[\Delta^{-1}(\bar{p}_{ik}) - \Delta^{-1}(s_{g/2}, 0)\big], \quad \forall i, j, k \in \{1, \ldots, n\}$$
The linguistic additive transitivity implies the linguistic additive reciprocity; therefore, as p̄_ii = (s_{g/2}, 0) ∀i, if we set k = i in Equation (4), then we have Δ(Δ⁻¹(p̄_ij) + Δ⁻¹(p̄_ji)) = (s_g, 0), ∀ i, j ∈ {1, …, n}. As a result, Equation (4) can be reformulated as follows [32]:
$$\bar{p}_{ik} = \Delta\big(\Delta^{-1}(\bar{p}_{ij}) + \Delta^{-1}(\bar{p}_{jk}) - \Delta^{-1}(s_{g/2}, 0)\big), \quad \forall i, j, k \in \{1, \ldots, n\}$$
From now on, a 2-tuple linguistic preference relation will be considered consistent when it satisfies the additive consistency property; i.e., for every three alternatives x_i, x_j, x_k ∈ X of the GDM problem, their associated 2-tuple linguistic preference degrees p̄_ij, p̄_jk, p̄_ik must satisfy Equation (5).
Equation (5) enables us to compute an estimated 2-tuple linguistic preference degree by leveraging existing 2-tuple linguistic preference degrees. Introducing an intermediate alternative x_j, the following estimates for p̄_ik (i ≠ k) can be derived in different ways [32]:
(1) Through p̄_ik = Δ(Δ⁻¹(p̄_ij) + Δ⁻¹(p̄_jk) − Δ⁻¹(s_{g/2}, 0)), we get the estimated linguistic 2-tuple:
$$(c\bar{p}_{ik})^{j1} = \Delta\big(\Delta^{-1}(\bar{p}_{ij}) + \Delta^{-1}(\bar{p}_{jk}) - \Delta^{-1}(s_{g/2}, 0)\big)$$
(2) Through p̄_jk = Δ(Δ⁻¹(p̄_ji) + Δ⁻¹(p̄_ik) − Δ⁻¹(s_{g/2}, 0)), we get the estimated linguistic 2-tuple:
$$(c\bar{p}_{ik})^{j2} = \Delta\big(\Delta^{-1}(\bar{p}_{jk}) - \Delta^{-1}(\bar{p}_{ji}) + \Delta^{-1}(s_{g/2}, 0)\big)$$
(3) Through p̄_ij = Δ(Δ⁻¹(p̄_ik) + Δ⁻¹(p̄_kj) − Δ⁻¹(s_{g/2}, 0)), we get the estimated linguistic 2-tuple:
$$(c\bar{p}_{ik})^{j3} = \Delta\big(\Delta^{-1}(\bar{p}_{ij}) - \Delta^{-1}(\bar{p}_{kj}) + \Delta^{-1}(s_{g/2}, 0)\big)$$
The overall estimated linguistic 2-tuple cp̄_ik of p̄_ik is calculated as the mean of all possible 2-tuple linguistic values (cp̄_ik)^{j1}, (cp̄_ik)^{j2}, and (cp̄_ik)^{j3}:

$$c\bar{p}_{ik} = \Delta\left(\frac{\sum_{j=1;\, j \neq i, k}^{n} \big(\Delta^{-1}((c\bar{p}_{ik})^{j1}) + \Delta^{-1}((c\bar{p}_{ik})^{j2}) + \Delta^{-1}((c\bar{p}_{ik})^{j3})\big)}{3(n-2)}\right)$$
It is essential to recognize that, in Equations (6)–(8), the argument of the function Δ can take values outside the interval [0, g]. To address this issue, the function h is employed to adjust the arguments of Δ:

$$h(y) = \begin{cases} 0, & \text{if } y < 0 \\ g, & \text{if } y > g \\ y, & \text{otherwise} \end{cases}$$
When the information expressed is completely consistent, then (cp̄_ik)^{jl} = p̄_ik, ∀ j, l. In consequence, the error between a 2-tuple linguistic preference degree and its estimate, measured in [0, 1], can be calculated as follows [32]:

$$\varepsilon_{ik} = \frac{\big|\Delta^{-1}(c\bar{p}_{ik}) - \Delta^{-1}(\bar{p}_{ik})\big|}{g}$$
This value is then used to define the consistency level of the 2-tuple linguistic preference degree p̄_ik as follows [32]:

$$cl_{ik} = 1 - \varepsilon_{ik}$$
When cl_ik = 1, it follows that ε_ik = 0, which signifies an absence of inconsistency. Conversely, a lower value of cl_ik corresponds to a higher value of ε_ik, resulting in an increased level of inconsistency of p̄_ik with respect to the other available information.
Finally, the consistency level, cl ∈ [0, 1], of a 2-tuple linguistic preference relation P̄ is obtained as follows [32]:

$$cl = \frac{\sum_{i=1}^{n} cl_i}{n}$$
where cl_i ∈ [0, 1] is the consistency level related to a particular alternative x_i of P̄, which is computed as:

$$cl_i = \frac{\sum_{k=1;\, k \neq i}^{n} (cl_{ik} + cl_{ki})}{2(n-1)}$$
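The computation of cl from Equations (6)–(14) can be sketched in Python (a minimal sketch that works directly on the numerical counterparts β = Δ⁻¹(p̄_ik), which is equivalent because Δ and Δ⁻¹ are mutually inverse; it requires n ≥ 3):

```python
def consistency_level(B, g):
    """Consistency level cl of a 2-tuple linguistic preference relation.
    B[i][k] holds the numerical counterpart β = Δ⁻¹(p̄_ik) ∈ [0, g].
    Requires n >= 3, since Equation (9) divides by 3(n - 2)."""
    n = len(B)
    h = lambda y: min(max(y, 0.0), g)  # Equation (10): clamp into [0, g]
    cl_pair = [[1.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            if i == k:
                continue
            estimates = []
            for j in range(n):
                if j == i or j == k:
                    continue
                estimates.append(h(B[i][j] + B[j][k] - g / 2))  # Eq. (6)
                estimates.append(h(B[j][k] - B[j][i] + g / 2))  # Eq. (7)
                estimates.append(h(B[i][j] - B[k][j] + g / 2))  # Eq. (8)
            cp = sum(estimates) / (3 * (n - 2))                 # Eq. (9)
            eps = abs(cp - B[i][k]) / g                         # Eq. (11)
            cl_pair[i][k] = 1 - eps                             # Eq. (12)
    cl_alt = [sum(cl_pair[i][k] + cl_pair[k][i]
                  for k in range(n) if k != i) / (2 * (n - 1))
              for i in range(n)]                                # Eq. (14)
    return sum(cl_alt) / n                                      # Eq. (13)

# A fully additive-transitive relation (g = 6, made-up values):
B = [[3.0, 3.5, 4.0],
     [2.5, 3.0, 3.5],
     [2.0, 2.5, 3.0]]
```

For the matrix above, consistency_level(B, 6) returns 1.0, since every estimate coincides with the expressed degree; perturbing any off-diagonal entry lowers cl below 1.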

2.4. Particle Swarm Optimization

Particle Swarm Optimization (PSO) is a population-based stochastic optimization technique that draws inspiration from the collective behaviors observed in flocks of birds and schools of fish; it tackles complex optimization problems by simulating how these natural systems communicate and explore their environment [38]. In each iteration, PSO assesses a population of candidate solutions, known as particles, each represented by a position vector x_i = (x_{i,1}, …, x_{i,d}) in a d-dimensional search space. These particles navigate the search space in pursuit of optimal solutions, guided by a quality measure known as the fitness function, denoted f, which evaluates how well each candidate solution meets the objective of the optimization problem. The movement of each particle is governed by its velocity vector v_i = (v_{i,1}, …, v_{i,d}), which is recalculated in every iteration. The velocity is influenced by two critical components: the best position that the individual particle has encountered so far, b_i = (b_{i,1}, …, b_{i,d}), and the best position discovered by any particle within the swarm, the global best position g = (g_1, …, g_d). These factors are combined with various coefficients to adjust the particles’ velocities and positions. As the algorithm progresses, each particle is incrementally guided toward more promising regions of the search space, enhancing the swarm’s collective ability to locate the optimal solution. The ultimate goal of PSO is to converge the particles towards the best solution, optimizing the target objective while balancing exploration of the search space and exploitation of the known best solutions.
While various adaptations of this algorithm may be applicable [39], we have chosen to utilize the standard version originally introduced by Kennedy and Eberhart [38]. The specific steps and procedures are detailed in Algorithm 1. Within this framework, the coefficients c_1 and c_2 function as acceleration factors, significantly influencing the particle’s trajectory toward both its personal best position and the global best position found among all particles in the swarm. Moreover, the parameter ω represents the inertia weight, a crucial component that balances the exploration and exploitation capabilities of the algorithm. A lower inertia weight ω encourages exploitation, enhancing the algorithm’s ability to perform localized searches around promising areas in the solution space. This shift promotes convergence to optimal solutions but can risk premature convergence if not managed effectively. To optimize the search process, it is customary to decrement the value of ω gradually as the number of iterations increases. In this way, the algorithm can conduct a comprehensive global search during its initial phases, allowing it to explore a wider area of the solution space. As the algorithm progresses, the reduction in ω facilitates a transition toward a more concentrated and refined local search, ultimately improving the algorithm’s efficiency and effectiveness in identifying optimal or near-optimal solutions. Consequently, the adjustment of ω is typically calibrated according to:
$$\omega(e) = \big(\omega(1) - \omega(m)\big)\,\frac{m - e}{m} + \omega(m)$$
In Equation (15), ω(1), ω(e), and ω(m) denote the initial, current, and final values assigned to ω, respectively, while m and e denote the maximum number of iterations and the current iteration. PSO was selected for this optimization task because the symbolic translations to be determined constitute a continuous set of real-valued parameters, and the corresponding consistency-based fitness function is non-convex and does not provide analytical gradient information. PSO is well suited to such conditions due to its population-based search strategy, its ability to efficiently explore continuous solution spaces, and its proven robustness in similar optimization problems involving linguistic modeling and granular computing. Moreover, PSO has been widely reported to converge rapidly when appropriate inertia-weight control is applied, offering an effective balance between exploration and exploitation. For these reasons, PSO represents an efficient and reliable optimization tool for the personalization of linguistic information in our framework.
Algorithm 1 PSO.
1: for each particle i = 1, …, n do
2:    Initialize its position x_i(0) within the lower and upper limits, l_inf and l_sup, of the search space, with a uniformly distributed random vector: x_i(0) ∼ U(l_inf, l_sup)
3:    Initialize b_i with the initial position: b_i ← x_i(0)
4:    if f(b_i) > f(g) then
5:       Update g with the particle’s best position: g ← b_i
6:    end if
7:    Initialize its velocity: v_i(0) ∼ U(−|l_sup − l_inf|, |l_sup − l_inf|)
8: end for
9: for each iteration e = 1, …, m do
10:    for each particle i = 1, …, n do
11:       for each dimension h = 1, …, d do
12:          Generate two random numbers: r_h, s_h ∼ U(0, 1)
13:          Update its velocity: v_{i,h}(e) ← ω(e)·v_{i,h}(e−1) + c_1·r_h·(b_{i,h} − x_{i,h}(e−1)) + c_2·s_h·(g_h − x_{i,h}(e−1))
14:       end for
15:       Update its position: x_i(e) ← x_i(e−1) + v_i(e)
16:       if f(x_i(e)) > f(b_i) then
17:          Update its best position: b_i ← x_i(e)
18:          if f(b_i) > f(g) then
19:             Update the global best position: g ← b_i
20:          end if
21:       end if
22:    end for
23: end for
24: return g
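Algorithm 1 admits a compact Python sketch (maximization; the test function and parameter values such as c₁ = c₂ = 2.0, with ω decreasing from 0.9 to 0.4, are common defaults from the PSO literature, not values prescribed by this paper):

```python
import random

def pso(f, dim, l_inf, l_sup, n=30, m=100, c1=2.0, c2=2.0,
        w_init=0.9, w_final=0.4, seed=0):
    """Standard PSO (maximization) following Algorithm 1, with the
    linearly decreasing inertia weight of Equation (15)."""
    rng = random.Random(seed)
    v_max = abs(l_sup - l_inf)
    X = [[rng.uniform(l_inf, l_sup) for _ in range(dim)] for _ in range(n)]
    V = [[rng.uniform(-v_max, v_max) for _ in range(dim)] for _ in range(n)]
    B = [x[:] for x in X]                      # personal best positions b_i
    fB = [f(x) for x in X]
    best_idx = max(range(n), key=lambda i: fB[i])
    G, fG = B[best_idx][:], fB[best_idx]       # global best position g
    for e in range(1, m + 1):
        w = (w_init - w_final) * (m - e) / m + w_final  # Equation (15)
        for i in range(n):
            for h in range(dim):
                r, s = rng.random(), rng.random()
                V[i][h] = (w * V[i][h]
                           + c1 * r * (B[i][h] - X[i][h])
                           + c2 * s * (G[h] - X[i][h]))
                X[i][h] += V[i][h]
            fx = f(X[i])
            if fx > fB[i]:
                B[i], fB[i] = X[i][:], fx
                if fx > fG:
                    G, fG = X[i][:], fx
    return G, fG

# Usage: maximize a simple concave function over one symbolic translation
# in [-0.5, 0.5); the optimum is at 0.2.
best, value = pso(lambda x: -(x[0] - 0.2) ** 2, dim=1, l_inf=-0.5, l_sup=0.5)
```

In the proposed approach, f would be replaced by the consistency-based fitness function over the experts' symbolic translations, with l_inf = −0.5 and l_sup = 0.5.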

3. Methodology: A Group Decision-Making Approach Based on Personalized Linguistic Information

In this section, we present a novel methodology for modeling and facilitating processes of GDM within linguistic contexts. Leveraging a framework of granular computing, we personalize the linguistic values derived from the preference degrees articulated by experts. Before undertaking the aggregation and exploitation phases of the selection process, it is essential to transform the linguistic values from these preference relations into an operational format through a structured granulation process. This transformation is pivotal, as it converts subjective linguistic inputs into quantifiable data suitable for further analysis. The result of this granulation process is an operational framework that enables the effective processing of data, ultimately yielding a comprehensive ranking of the available alternatives. This ranking is in alignment with the linguistic values communicated by the experts, ensuring that their insights are accurately represented and integrated into the decision-making process.
The precise definition of linguistic values is closely associated with the structured development of a collection of information granules. In this study, we advocate for an innovative approach that departs from previously explored methodologies by articulating the granulation of linguistic information through the framework of the 2-tuple linguistic model instead of intervals [12,13,14,15,16,18,19]. This process encompasses four significant characteristics that enhance its effectiveness:
  • Retention of semantic meaning. The process meticulously preserves the meanings of the linguistic values that are distributed throughout the granulation phase. This ensures that the core ideas and concepts conveyed by the language remain intact, facilitating a deeper understanding among users.
  • Representation of information based on the 2-tuple linguistic model. Each granule of information is represented as a pair consisting of a linguistic term and a corresponding numerical value. The linguistic term reflects the qualitative aspect of the information, while the numerical value quantifies the term’s significance. This dual-format representation not only enhances the precision and clarity of the conveyed linguistic information but also enables more effective communication and thorough analysis in a variety of applications, such as decision-making, data interpretation, and modeling.
  • Recognition of subjective meaning. The process recognizes the subjective nature of language by acknowledging that the same words may convey different meanings to different individuals. It entails a distinct treatment of the linguistic values supplied by the experts involved in the process, allowing for a more nuanced understanding of varying perspectives and interpretations.
  • Operationalization of linguistic values. The process operationalizes the linguistic values by modeling them as linguistic 2-tuples. This is achieved by formulating a specific optimization problem aimed at maximizing consistency concerning the 2-tuple linguistic preference relations. This step is crucial as it provides a structured approach to handling linguistic information, ensuring that the final outcomes align closely with the diverse preferences and judgments of the experts.
It is worth highlighting that the methodological process described in this section is fully implemented throughout the development of the proposed approach. Each phase of the Design Science Research methodology is explicitly addressed and executed in the subsequent sections, ensuring that the complete Design Science Research cycle is rigorously fulfilled within our model. In the following subsections, we provide a detailed examination of the four distinct phases involved in the proposed GDM approach. These phases are: (i) articulation of assessments, where experts clearly express their evaluations and opinions regarding the decision at hand; (ii) granulation of linguistic information, which involves breaking down and categorizing the qualitative data gathered from the assessments to capture the nuances of experts’ insights; (iii) aggregation, where the individual assessments and granulated information are systematically combined to form a cohesive group perspective; and (iv) exploitation, which focuses on utilizing the aggregated information to inform decision-making and action, ensuring that the outcomes align with the group’s collective goals and objectives. Each phase plays a critical role in enhancing the overall effectiveness of the GDM process. To improve the comprehensibility of the proposed approach, a graphical representation of the method has been incorporated (see Figure 1). This diagram summarizes the full workflow, including the articulation of experts’ assessments, the granulation of linguistic information through personalized symbolic translations, the aggregation of 2-tuple linguistic preference relations, and the final exploitation phase. The graphical overview enables readers to visualize the sequence, interdependence, and purpose of each stage, thereby enhancing the interpretability and accessibility of the model.

3.1. Articulation of Assessments

The first phase is dedicated to the systematic collection of assessments from a panel of experts, using linguistic pairwise comparisons to articulate their preferences. In this context, we consider a group consisting of m experts, denoted as e_h, where the index h varies from 1 to m. Each expert is tasked with evaluating a set of n alternatives, represented as x_i, where the index i ranges from 1 to n. In addition, we utilize a comprehensive set of linguistic values, labeled as s_l, with l ranging from 0 to g, to facilitate nuanced comparisons. A linear order ≺ between the linguistic values is assumed: given s_i, s_j ∈ S, if s_i ≺ s_j (that is, j > i), then s_j denotes a greater degree of preference than s_i. During this phase, each expert e_h constructs a linguistic preference relation, designated as P_h. This relation comprises their pairwise comparisons between the alternatives, expressed through the predefined linguistic values s_l. These comparisons allow for a qualitative assessment of the alternatives based on the experts’ subjective judgments, which can capture the nuances of their preferences more effectively than numerical ratings alone [3]. This approach not only enhances the clarity of the decision-making process, but also ensures that the collective insights of the experts are integrated cohesively.

3.2. Granulation of Linguistic Information

Prior to commencing the selection process (aggregation and exploitation), it is imperative to operationalize the linguistic values derived from the linguistic preference relations, and this is the objective of this second phase. Linguistic values utilized in the pairwise comparisons offered by the experts, in their inherent nature, are non-operational, indicating that further processing or analysis cannot be performed on them in their current state. Consequently, these linguistic values necessitate a procedure known as granulation, which involves the transformation of these values into structured entities referred to as information granules [40]. This process facilitates enhanced understanding and manipulation of the associated data.
The granular definition of linguistic values is fundamentally linked to the realization of a structured set of information granules. This study investigates the granulation of linguistic information through the framework of the 2-tuple linguistic model [25], which facilitates a more formal representation of linguistic nuances, suggesting that information granules are conveyed as linguistic 2-tuples. By employing this model, this study aims to improve clarity and effectiveness in communication, particularly in contexts characterized by imprecise or fuzzy information. Moreover, consistency is fundamental in formalizing linguistic values into linguistic 2-tuples because when consistency levels are elevated, the quality of solutions derived from decision-making processes tends to improve significantly. Additionally, the concept of consistency facilitates the personalization of individual semantics. Experts assign their unique interpretation and meaning to linguistic values based on their preferences, which is inherently linked to their level of consistency. This individual semantic framework reflects how personal experiences and understanding shape the assessment of linguistic values. Given these critical insights, this study also leverages consistency as an essential criterion for optimization. By focusing on consistency, we aim to enhance GDM processes and outcomes, ensuring that they are not only accurate but also aligned with individual preferences and interpretations.
In the context of optimizing the criterion, various strategies may be employed to enhance the process. Nevertheless, PSO has consistently demonstrated its efficacy in addressing such optimization challenges [12,13], making it the preferred method for this task. This algorithm is fundamentally composed of two critical components: the definition of the particle and the fitness function utilized to evaluate the particle’s quality.
Regarding the particle definition, we employ a modeling approach that utilizes a vector of symbolic translations, specifically constrained to the interval [−0.5, 0.5). These symbolic translations play a critical role in delineating how specific 2-tuple linguistic values are assigned to their corresponding linguistic terms during the transformation process. To elaborate on this, consider a scenario involving a set of linguistic terms S = {s_0, s_1, s_2, s_3, s_4} used by three experts E = {e_1, e_2, e_3}. Although these experts may refer to the same set of linguistic terms, their interpretations may vary due to their individual backgrounds, expertise, and perspectives. As a result, this diversity of interpretation can lead to the emergence of distinct 2-tuple linguistic values for identical terms. This phenomenon underscores the significance of acknowledging the subjective nature of language and the complexities that arise when various experts strive to convey similar concepts through linguistic expressions. Therefore, for each expert e_h, h = 1, 2, 3, a different mapping is formed: s̄_0^h = (s_0, α_0^h), s̄_1^h = (s_1, α_1^h), s̄_2^h = (s_2, α_2^h), s̄_3^h = (s_3, α_3^h), s̄_4^h = (s_4, α_4^h), being α_0^h, α_1^h, α_2^h, α_3^h, and α_4^h the symbolic translations for the expert e_h. In particular, if we consider g + 1 linguistic values and m experts, each particle is composed of (g + 1) · m symbolic translations. Then, in this particular example, each particle is modeled as [α_0^1 α_1^1 α_2^1 α_3^1 α_4^1 | α_0^2 α_1^2 α_2^2 α_3^2 α_4^2 | α_0^3 α_1^3 α_2^3 α_3^3 α_4^3].
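The encoding just described can be sketched in Python as follows; this is an illustrative fragment, and the function and variable names are our own rather than part of the original method:

```python
# Illustrative sketch of the particle encoding used in the granulation phase.
# A particle is a flat vector of (g + 1) * m symbolic translations, each in [-0.5, 0.5).

def decode_particle(particle, g, m):
    """Split a flat particle into one {term index: (term index, alpha)} map per expert."""
    assert len(particle) == (g + 1) * m
    mappings = []
    for h in range(m):
        alphas = particle[h * (g + 1):(h + 1) * (g + 1)]
        # Expert h maps each linguistic term s_l to the 2-tuple (s_l, alpha_l^h).
        mappings.append({l: (l, alphas[l]) for l in range(g + 1)})
    return mappings

# Example with g + 1 = 5 terms and m = 3 experts, as in the text above.
particle = [0.1, -0.2, 0.0, 0.3, -0.4,   # expert 1
            0.2, 0.1, -0.1, 0.0, 0.4,    # expert 2
            -0.3, 0.0, 0.2, -0.1, 0.1]   # expert 3
mappings = decode_particle(particle, g=4, m=3)
```

Each expert thus keeps the same term set but acquires a personal numeric reading of every term.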
With respect to the definition of the fitness function f, the objective is to maximize the consistency level associated with the 2-tuple linguistic preference relations formed according to the symbolic translations contained in the particles. Consequently, the fitness function is defined as follows:
f = (1/m) ∑_{h=1}^{m} cl_h
where cl_h denotes the consistency level of the 2-tuple linguistic preference relation P̄_h associated with the linguistic preference relation P_h expressed by the expert e_h, whose value is calculated according to Equation (13).
In this framework, PSO seeks to maximize the values of the fitness function by refining the locations of the symbolic translations. This process entails the modification of symbolic translations associated with 2-tuple linguistic values while ensuring the preservation of the original linguistic terms provided by experts.
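Since Equation (13) is not reproduced in this section, the following Python sketch substitutes an illustrative additive-transitivity proxy for the consistency level cl_h; it shows the shape of the fitness evaluation a PSO routine would maximize, not the paper’s exact measure:

```python
# Sketch of the fitness evaluation that drives the PSO search. The consistency
# level of the paper (Equation (13)) is not reproduced here, so cl_h is
# approximated by an additive-transitivity proxy on numeric preference
# matrices (entries are Delta^{-1} values on the scale [0, g]).

def consistency_level(R, g):
    """Proxy consistency in [0, 1]: 1 minus the mean additive-transitivity error."""
    n = len(R)
    total, count = 0.0, 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if len({i, j, k}) == 3:
                    # Additive transitivity expects R[i][k] = R[i][j] + R[j][k] - g/2.
                    err = abs(R[i][k] - (R[i][j] + R[j][k] - g / 2))
                    total += min(err / g, 1.0)
                    count += 1
    return 1.0 - total / count if count else 1.0

def fitness(relations, g):
    """Mean consistency level over the experts' preference relations (the f above)."""
    return sum(consistency_level(R, g) for R in relations) / len(relations)

# A perfectly additively consistent relation on the scale [0, 4].
R = [[2.0, 3.0, 4.0],
     [1.0, 2.0, 3.0],
     [0.0, 1.0, 2.0]]
score = fitness([R], g=4)
```

Any off-the-shelf PSO implementation can then drive this function over the particle vector of symbolic translations.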
Although the proposed approach requires optimizing symbolic translations for all linguistic terms across all experts, the computational burden of the PSO stage remains moderate. The dimensionality of the optimization problem is defined by ( g + 1 ) m , which in typical GDM settings results in a relatively small number of decision variables. PSO exhibits rapid convergence in continuous, low-dimensional search spaces, and in our experiments the optimization process consistently stabilized within fewer than 300 iterations. For the case study presented in Section 4, using a population of 100 particles, the average runtime of the PSO procedure was below one second on a standard 3.1 GHz processor. This confirms that the personalization step introduces negligible computational cost relative to the overall GDM workflow. Moreover, once the optimal symbolic translations are obtained, no further optimization is required in subsequent stages, ensuring that the method remains scalable and efficient for practical applications.

3.3. Aggregation

In this phase, the objective is to establish a collective preference relation that effectively synthesizes the linguistic pairwise comparisons articulated by the group of experts. We begin with a set of m individual 2-tuple linguistic preference relations, denoted as P̄_1, P̄_2, …, P̄_m. Each of these relations encapsulates the subjective preferences of the experts concerning a variety of alternatives.
To consolidate this information, it is necessary to derive a collective 2-tuple fuzzy linguistic preference relation, represented as P̄^c = (p̄_ij^c). This process requires the implementation of an aggregation procedure that integrates the individual preferences into a unified framework. Each value p̄_ij^c resides within the set S × [−0.5, 0.5) and quantitatively represents the extent of preference that alternative x_i has over alternative x_j. In this study, this representation reflects the consensus reached among the most consistent experts, capturing a coherent collective judgment.
To facilitate a robust aggregation of the experts’ opinions, a 2-tuple linguistic Ordered Weighted Averaging (OWA) operator is employed. This operator not only assists in combining the individual preferences but also accommodates the varying levels of significance attributed to each expert’s opinion. By utilizing this methodology, the final collective 2-tuple linguistic preference relation is constructed to meaningfully reflect the group’s overall assessment, while preserving the nuances of the linguistic evaluations provided.
Definition 4
([32]). “A 2-tuple linguistic OWA operator of dimension n is defined as a function φ: (S × [−0.5, 0.5))^n → S × [−0.5, 0.5), that has a weighting vector associated with it, W = (w_1, …, w_n), with w_i ∈ [0, 1] and ∑_{i=1}^{n} w_i = 1:
φ_W(p̄_1, …, p̄_n) = Δ(∑_{i=1}^{n} w_i · Δ^{-1}(p̄_{σ(i)})),  p̄_i ∈ S × [−0.5, 0.5)
being σ: {1, …, n} → {1, …, n} a permutation defined on 2-tuple linguistic values, such that p̄_{σ(i)} ≥ p̄_{σ(i+1)}, i = 1, …, n−1; i.e., p̄_{σ(i)} is the i-th highest 2-tuple linguistic value in the set {p̄_1, …, p̄_n}.”
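Under the standard 2-tuple encoding (a term index l plus a symbolic translation α), Definition 4 can be sketched in Python as follows; the helper names are ours, and the fragment is a minimal illustration rather than a reference implementation:

```python
# Minimal sketch of the 2-tuple linguistic OWA operator of Definition 4.
# A 2-tuple (l, alpha) pairs a term index l with a symbolic translation
# alpha in [-0.5, 0.5).

def delta_inv(t):
    """Delta^{-1}: map a 2-tuple (l, alpha) to its numeric value l + alpha."""
    l, alpha = t
    return l + alpha

def delta(beta):
    """Delta: map a numeric value beta back to the closest 2-tuple."""
    l = round(beta)
    return (l, beta - l)

def owa_2tuple(tuples, weights):
    """Order the 2-tuples decreasingly and aggregate with the weighting vector."""
    ordered = sorted(tuples, key=delta_inv, reverse=True)
    beta = sum(w * delta_inv(t) for w, t in zip(weights, ordered))
    return delta(beta)
```

For instance, aggregating (s_3, 0.2), (s_2, −0.1), and (s_4, 0.0) with W = (0.5, 0.3, 0.2) yields (s_3, 0.34).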
An important consideration in the definition of an OWA operator is the method for deriving its weighting vector. In the work of Yager [41], a specific mathematical expression was introduced to compute it:
w_i = Q(i/n) − Q((i − 1)/n),  i = 1, …, n
Equation (18) is significant because it allows for representation of the fuzzy majority concept, which reflects the idea of collective decision-making based on fuzzy logic [42]. Furthermore, it utilizes a fuzzy linguistic non-decreasing quantifier Q, which systematically categorizes and ranks values in a way that preserves their order [43]. This approach enhances the OWA operator’s flexibility and applicability in various contexts where uncertainty and imprecision are inherent.
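With Q(r) = r^{1/2} modeling “most of” and n = 4 (the setting used later in Section 4), this weighting expression can be checked with a few lines of Python (an illustrative sketch with names of our own):

```python
# Yager's quantifier-guided OWA weights: w_i = Q(i/n) - Q((i-1)/n), where
# Q(r) = r ** 0.5 models the linguistic quantifier "most of".

def owa_weights(n, Q=lambda r: r ** 0.5):
    return [Q(i / n) - Q((i - 1) / n) for i in range(1, n + 1)]

weights = owa_weights(4)  # the four-expert setting used in the case study
```

Rounded to two decimals, this gives the vector (0.50, 0.21, 0.16, 0.13), and the weights always sum to 1 because the quantifier is non-decreasing with Q(0) = 0 and Q(1) = 1.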
The 2-tuple linguistic OWA operator overlooks the varying significance of individual experts in decision-making processes. However, a logical and fair assumption when addressing GDM problems is that greater importance should be assigned to those participants communicating the most consistent and reliable information. This approach acknowledges that GDM scenarios are inherently heterogeneous, meaning the contributions of experts can differ significantly in terms of reliability and relevance [12].
The incorporation of importance values into the aggregation process typically requires the transformation of preference values, designated as p ¯ i j , using a corresponding importance degree I. This transformation results in new values, indicated as p ˜ i j [44,45]. The process is executed through a defined transformation function t, expressed as p ˜ i j = t ( p ¯ i j , I ) [45]. An alternative approach that merits consideration involves utilizing importance or consistency degrees as the order-inducing values within the Induced Ordered Weighted Averaging (IOWA) operator, formalized as an extension of the OWA operator [46]. This extension facilitates a distinct reordering of the values intended for aggregation, thus allowing for greater flexibility in decision-making and enhancing the accuracy of the aggregation process based on the specific context and criteria involved. By implementing this methodology, the aggregation of preferences can more accurately reflect varying levels of importance and consistency among the criteria under consideration.
Definition 5
([32]). “A 2-tuple linguistic IOWA operator of dimension n is a function:
Φ_W(⟨u_1, p̄_1⟩, …, ⟨u_n, p̄_n⟩) = Δ(∑_{i=1}^{n} w_i · Δ^{-1}(p̄_{σ(i)})),  p̄_i ∈ S × [−0.5, 0.5)
being σ a permutation of {1, …, n} such that u_{σ(i)} ≥ u_{σ(i+1)}, i = 1, …, n−1; i.e., ⟨u_{σ(i)}, p̄_{σ(i)}⟩ is the linguistic 2-tuple with u_{σ(i)} the i-th highest value in the set {u_1, …, u_n}.”
In Definition 5, the reordering of the set of values designated for aggregation, {p̄_1, …, p̄_n}, is dictated by the reordering of the corresponding values {u_1, …, u_n}, which is based on their magnitudes. This role of the set {u_1, …, u_n} has led to their classification as the values of an order-inducing variable, as opposed to the argument variable [46].
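A minimal sketch of this induced reordering, with names of our own choosing, makes the distinction between the order-inducing values and the aggregated arguments explicit:

```python
# Sketch of the 2-tuple linguistic IOWA operator of Definition 5: the
# order-inducing values u_i dictate the ordering, not the 2-tuples themselves.

def delta_inv(t):
    """Delta^{-1}: map a 2-tuple (l, alpha) to its numeric value l + alpha."""
    l, alpha = t
    return l + alpha

def delta(beta):
    """Delta: map a numeric value beta back to the closest 2-tuple."""
    l = round(beta)
    return (l, beta - l)

def iowa_2tuple(pairs, weights):
    """pairs: list of (u_i, 2-tuple); aggregate after sorting by u_i descending."""
    ordered = sorted(pairs, key=lambda p: p[0], reverse=True)
    beta = sum(w * delta_inv(t) for w, (_, t) in zip(weights, ordered))
    return delta(beta)
```

Here the pair with the largest u value receives the first weight, regardless of the magnitude of its 2-tuple.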
In this context, to derive the corresponding weighting vector, Yager proposed a methodology for assessing the overall satisfaction associated with Q significant criteria ( u k ) or experts ( e k ) regarding the alternative x j [47]. This methodology involves ordering the satisfaction values intended for aggregation, after which the weighting vector linked to an IOWA operator that employs a linguistic quantifier Q is calculated according to:
w_i = Q(∑_{k=1}^{i} u_{σ(k)} / T) − Q(∑_{k=1}^{i−1} u_{σ(k)} / T)
where T = ∑_{k=1}^{n} u_k denotes the total sum of importance degrees, with σ representing the permutation applied to order the values for aggregation. This methodology for integrating importance degrees allocates a weight of zero to any experts whose importance degree is zero. In our analysis, we employ the consistency levels of the 2-tuple linguistic preference relations to derive the importance degrees related to each expert.
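The following sketch reproduces this weighting scheme in Python, using for illustration the consistency degrees that appear in the worked example of Section 4 and mirroring its two-decimal rounding; the function name is ours:

```python
# Importance-based IOWA weights (Yager): w_i = Q(S_i / T) - Q(S_{i-1} / T),
# where S_i accumulates the ordered importance degrees and T is their total.

def iowa_importance_weights(importances, Q=lambda r: r ** 0.5, ndigits=2):
    u = sorted(importances, reverse=True)      # order-inducing values, descending
    T = sum(u)
    weights, prev = [], 0.0
    for i in range(1, len(u) + 1):
        q = round(Q(sum(u[:i]) / T), ndigits)  # two-decimal rounding, as in the text
        weights.append(round(q - prev, ndigits))
        prev = q
    return weights

# Consistency degrees of the four experts for one preference pair in Section 4.
weights = iowa_importance_weights([0.84, 0.88, 0.82, 0.65])
```

With these inputs the most consistent expert receives the dominant weight (0.53), while the least consistent one receives only 0.11.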
Definition 5 facilitates the creation of various operators. Notably, the set of consistency levels associated with the preference values, {cl_ij^1, …, cl_ij^m}, can be employed to define an IOWA operator. The aggregation of the preference values, denoted as {p̄_ij^1, …, p̄_ij^m}, can be achieved by ranking the experts according to their consistency, from the most consistent to the least consistent. This method produces an IOWA operator, Φ_Q^AC, which is the additive-consistency 2-tuple IOWA operator and can be regarded as an extension of the AC-IOWA operator [48]. Consequently, the formulation of the collective 2-tuple linguistic preference relation is:
p̄_ij^c = Φ_Q^AC(⟨cl_ij^1, p̄_ij^1⟩, …, ⟨cl_ij^m, p̄_ij^m⟩)
In Equation (21), Q represents the fuzzy quantifier used to implement the fuzzy majority concept, and it is used, through Equation (20), to compute the weighting vector of the Φ_Q^AC operator.

3.4. Exploitation

In this phase, we leverage the information derived from the collective 2-tuple linguistic preference relation to systematically rank the available alternatives. The objective is to identify the “best” decision(s) that are most widely accepted among the most consistent experts within the group. To accomplish this, we can employ two established methods for assessing the degree of preference for each alternative. These methods, as discussed by Herrera-Viedma et al. [49], utilize OWA operators and incorporate the concept of fuzzy majority:
  • The Quantifier-Guided Dominance Degree ( Q G D D ). It is a metric that assesses the level of dominance one alternative possesses over others within a fuzzy majority context. This metric is defined through a specific mathematical framework that encapsulates the complexities of dominance in decision-making processes characterized by uncertainty and imprecision. By employing this approach, one can gain a deeper insight into the relationships among various alternatives, taking into account not only explicit preferences but also the nuanced comparative advantages that may arise in a fuzzy environment. It is defined as:
    QGDD_i = φ_Q(p̄_i1^c, p̄_i2^c, …, p̄_i(i−1)^c, p̄_i(i+1)^c, …, p̄_in^c)
  • The Quantifier-Guided Non-Dominance Degree ( Q G N D D ). It serves as a metric to assess the extent to which each alternative remains unaffected by the influence of a fuzzy majority among the remaining alternatives. This concept is essential in decision-making contexts, as it helps in identifying alternatives that maintain their uniqueness and value, despite the presence of other competitive options. The non-dominance degree is defined in the following manner:
    QGNDD_i = φ_Q(Neg(p̄_1i^s), …, Neg(p̄_(i−1)i^s), Neg(p̄_(i+1)i^s), …, Neg(p̄_ni^s))
    where Neg(p̄_ji^s) = Δ(g − Δ^{-1}(p̄_ji^s)) and p̄_ji^s represents the degree to which x_i is strictly dominated by x_j, which is computed as:
    p̄_ji^s = (s_0, 0), if p̄_ji^c < p̄_ij^c;  p̄_ji^s = Δ(Δ^{-1}(p̄_ji^c) − Δ^{-1}(p̄_ij^c)), if p̄_ji^c ≥ p̄_ij^c
The application of choice degrees over X can be executed through two principal policies: the sequential policy and the conjunctive policy. Consequently, within the framework of a comprehensive selection process, the implementation of choice degrees occurs in three distinct steps:
  • Applying each choice degree of alternatives over X to generate two sets of alternatives:
    X^QGDD = {x_i ∈ X | QGDD_i = sup_{x_j ∈ X} QGDD_j}
    X^QGNDD = {x_i ∈ X | QGNDD_i = sup_{x_j ∈ X} QGNDD_j}
    whose elements are referred to as the maximum dominance elements in the fuzzy majority of X, quantified by Q, and the maximal non-dominated elements in the fuzzy majority of X, also quantified by Q, respectively.
  • Applying the conjunction selection policy to produce this collection of alternatives:
    X^QGCP = X^QGDD ∩ X^QGNDD
    When X^QGCP ≠ ∅, the process finishes. Otherwise, it continues.
  • Applying one of the two sequential selection policies, according to either a dominance or non-dominance criterion:
    • Dominance-based sequential selection process. Applying the QGDD over X to generate X^QGDD. When #(X^QGDD) = 1, the process finishes and this is the solution set. Otherwise, it continues producing:
      X^QGDD-NDD = {x_i ∈ X^QGDD | QGNDD_i = sup_{x_j ∈ X^QGDD} QGNDD_j}
      This is the solution set.
    • Non-dominance-based sequential selection process. Applying the QGNDD over X to produce X^QGNDD. When #(X^QGNDD) = 1, the process finishes and this is the solution set. Otherwise, it continues obtaining:
      X^QGNDD-DD = {x_i ∈ X^QGNDD | QGDD_i = sup_{x_j ∈ X^QGNDD} QGDD_j}
      This is the solution set.
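The two choice degrees above can be sketched numerically as follows; the code operates directly on Δ^{-1} values over a hypothetical three-alternative collective relation, so all data and names are illustrative rather than taken from the case study:

```python
# Numeric sketch of the exploitation phase: quantifier-guided dominance (QGDD)
# and non-dominance (QGNDD) degrees, computed on Delta^{-1} values over the
# scale [0, g]. All data here are illustrative.

def owa(values, Q=lambda r: r ** 0.5):
    """Quantifier-guided OWA: sort decreasingly, weight with w_i = Q(i/n) - Q((i-1)/n)."""
    n = len(values)
    w = [Q(i / n) - Q((i - 1) / n) for i in range(1, n + 1)]
    return sum(wi * v for wi, v in zip(w, sorted(values, reverse=True)))

def qgdd(P):
    """Dominance degree of each alternative: OWA over its row (diagonal excluded)."""
    n = len(P)
    return [owa([P[i][j] for j in range(n) if j != i]) for i in range(n)]

def qgndd(P, g):
    """Non-dominance degree: OWA over Neg of the strict dominance suffered."""
    n = len(P)
    degrees = []
    for i in range(n):
        negs = []
        for j in range(n):
            if j != i:
                strict = max(P[j][i] - P[i][j], 0.0)  # degree to which x_j strictly dominates x_i
                negs.append(g - strict)               # Neg(.) on the [0, g] scale
        degrees.append(owa(negs))
    return degrees

# Hypothetical collective relation for three alternatives on the scale [0, 4];
# diagonal entries are placeholders and are never read.
P = [[0.0, 3.0, 4.0],
     [1.0, 0.0, 3.0],
     [0.0, 1.0, 0.0]]
dd, ndd = qgdd(P), qgndd(P, g=4)
best = max(range(len(P)), key=lambda i: dd[i])
```

In this toy relation both degrees single out the first alternative, so the conjunctive selection policy would already terminate the process.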

4. Case Study

This section delineates a case study focused on the conversion of a non-residential building into a residential structure, based on the proposed methodology. The building under examination was constructed prior to 1917 and underwent reconstruction in 2019 under the purview of LLC “Engineering and Construction.” It is located on Nizhniy Val Street in the Podilskyi district of Kyiv, an area characterized by dense urban development and recognized as a central historical territory, which is also designated as an archaeological protection zone [50]. The primary objective of this study is the reconstruction of three floors of an eight-story brick building, which features a distinctive gabled roof covered with metal tiles. The maximum dimensions of the building are 46.94 m in length and 9.4 m in width. The structural framework comprises load-bearing interior and exterior walls made of brick, along with monolithic sections supported by metal beams, wooden beams, and a cork flooring system. The partitions within the building are also constructed from brick.
The project entails the exterior enhancement of the house, which includes the installation of metal–plastic windows, the implementation of a ventilation system (including air conditioners), as well as the integration of a water supply system and heating system, all in accordance with the specified project guidelines. It is important to note that the project does not entail the involvement or relocation of any existing structures situated on the design site, nor does it permit the removal of existing greenery.
A single, fully detailed case study was selected because the purpose of this article is not to perform a broad empirical comparison but to illustrate, step by step, how personalized linguistic semantics are operationalized within the proposed granular computing-based GDM model. Using multiple case studies would significantly increase the manuscript length without providing additional methodological insight. The chosen building refurbishment scenario offers a real-world, data-rich environment involving multiple experts and conflicting linguistic assessments. Within this context, we decided to focus specifically on the ventilation system because it is the subsystem that most clearly reflects the multidimensional nature of linguistic evaluations. Criteria such as noise level, airflow efficiency, durability, and cost often generate divergent expert judgments due to professional background and experience. This makes ventilation an ideal example to demonstrate the usefulness of personalized semantic granulation. Other systems exhibit less semantic variability or fewer conflicting criteria, and therefore provide less illustrative value for showing the strengths of the proposed approach. According to the project case study, the systems for water supply, heating, ventilation, windows, and roofing necessitate renovation within the building. However, this analysis focuses specifically on the ventilation component.
The ventilation system in the residential building incorporates both natural and mechanical methods. Currently, the building features REHAU S710 profile plastic windows, which have deteriorated over time, leading to sub-optimal ventilation conditions. To comply with the air quality parameters established by current regulations, the design includes supply and exhaust ventilation systems with mechanical assistance. The capacity of these ventilation systems is determined by the normative air exchange rates on a per capita basis. Furthermore, the project includes provisions for the installation of outdoor air conditioning units. The air ducts are constructed from galvanized sheet steel.
Subsequently, an analysis is conducted to assess the significance, utility, and priority of the renovated ventilation system. The ventilation project will be developed based on defined initial data:
  • Estimated wind speed:
    Cold season: 2.8 m/s.
    Warm season: 2.1 m/s.
  • Current building codes and regulations:
    Estimated summer temperature for ventilation design: +23 °C.
    Estimated winter temperature for heating and ventilation design: −22 °C.
    DBN B.2.5-67: 2013 “Heating, ventilation, and air conditioning” [51].
  • Architectural-construction drawings.
The estimated air circulation in both general and auxiliary rooms is determined based on the required air exchange standards. In areas where harmful air is present, the design accounts for adequate circulation to ensure its removal. The amount of outdoor air needed in administrative spaces is calculated in accordance with DBN B.2.5-67:2013 [51]. In parking lots, the air supply is concentrated in the driveway, while air extraction is evenly distributed between the lower and upper zones. Ventilation equipment for the parking area is installed within the ventilation chambers of the basement. It is important to note that exhaust air will exceed the supply according to item 8.39 of DBN B.2.3-15-2007 [52]. In administrative rooms, ventilation system productivity is intentionally reduced to a single air exchange during non-working hours to conserve energy resources. Supply and exhaust air ducts are routed concealed within ventilation shafts and behind architectural structures. These ducts are made from galvanized steel as per GOST 19904-74 [53] and adhere to the density class specified in DBN B.2.5-67-2013 [51].
Exhaust emissions in architectural mines are managed through exhaust holes positioned 1.0 m below the roof. Air intake is set at 2.0 m above ground, with a minimum separation of 8.0 m between intake and outlet. To minimize ventilation noise, pumps and fans are installed on vibration-insulating bases, fans and ducts are connected with flexible inserts, and mufflers are used.
The ventilation systems of five companies were evaluated using four indicators: maximum pressure, price, durability, and level of noise. The selected companies were BHTC (Ukraine, x_1), VIKMAS LTD (Ukraine, x_2), BT-Service (Ukraine, x_3), KERMI (Germany, x_4), and Danfoss (Denmark, x_5). Four ventilation specialists provided their preferences using the linguistic term set S = {s_0, s_1, s_2, s_3, s_4}, being s_0 = “Much Worse” (MW), s_1 = “Worse” (W), s_2 = “Equal” (E), s_3 = “Better” (B), and s_4 = “Much Better” (MB). The linguistic preference relations given were:
P_1 =
  −    W    E    E    B
  MB   −    MB   MB   E
  E    W    −    MB   W
  E    MW   W    −    MW
  MW   E    B    MB   −

P_2 =
  −    E    W    MB   E
  E    −    MB   MB   MW
  B    MW   −    E    MB
  MW   MW   E    −    MW
  E    B    W    MB   −

P_3 =
  −    MW   B    MB   MB
  B    −    E    W    W
  MW   E    −    E    W
  W    B    E    −    B
  MW   B    MB   W    −

P_4 =
  −    W    MB   MB   B
  B    −    MW   MW   E
  MW   MB   −    B    W
  MW   MB   W    −    MB
  W    E    B    MW   −
Following the collection of linguistic preference relations from the four ventilation specialists, we utilized the information granulation approach described in Section 3.2 to personalize and operationalize the linguistic values.
The parameters for the PSO framework were determined through extensive experimentation. A swarm size of 100 particles was selected, as this configuration yielded consistent results across multiple runs. The number of generations was fixed at 300 because no further improvement in fitness function values was observed beyond this threshold. Both c 1 and c 2 were set to 2 in accordance with established practices in the literature [39]. The inertia weight ω was programmed to decrease linearly from 0.9 to 0.4 . The PSO returned the following vector of symbolic translations:
[ 0.45  0.39  0.11  0.33  0.45 | 0.41  0.44  0.14  0.45  0.35 | 0.39  0.35  0.45  0.40  0.38 | 0.45  0.45  0.06  0.45  0.45 ]
Each expert’s linguistic values are mapped to the following 2-tuple linguistic values based on the elements of this vector:
s̄_0^1 = (MW, 0.45)   s̄_1^1 = (W, 0.39)   s̄_2^1 = (E, 0.11)   s̄_3^1 = (B, 0.33)   s̄_4^1 = (MB, 0.45)
s̄_0^2 = (MW, 0.41)   s̄_1^2 = (W, 0.44)   s̄_2^2 = (E, 0.14)   s̄_3^2 = (B, 0.45)   s̄_4^2 = (MB, 0.35)
s̄_0^3 = (MW, 0.39)   s̄_1^3 = (W, 0.35)   s̄_2^3 = (E, 0.45)   s̄_3^3 = (B, 0.40)   s̄_4^3 = (MB, 0.38)
s̄_0^4 = (MW, 0.43)   s̄_1^4 = (W, 0.41)   s̄_2^4 = (E, 0.06)   s̄_3^4 = (B, 0.39)   s̄_4^4 = (MB, 0.42)
This means that the linguistic preference relations initially expressed by the four ventilation specialists are converted into the following four 2-tuple linguistic preference relations:
P̄_1 =
  −            (W, 0.39)    (E, 0.11)    (E, 0.11)    (B, 0.33)
  (MB, 0.45)   −            (MB, 0.45)   (MB, 0.45)   (E, 0.11)
  (E, 0.11)    (W, 0.39)    −            (MB, 0.45)   (W, 0.39)
  (E, 0.11)    (MW, 0.45)   (W, 0.39)    −            (MW, 0.45)
  (MW, 0.45)   (E, 0.11)    (B, 0.33)    (MB, 0.45)   −

P̄_2 =
  −            (E, 0.14)    (W, 0.44)    (MB, 0.35)   (E, 0.14)
  (E, 0.14)    −            (MB, 0.35)   (MB, 0.35)   (MW, 0.41)
  (B, 0.45)    (MW, 0.41)   −            (E, 0.14)    (MB, 0.35)
  (MW, 0.41)   (MW, 0.41)   (E, 0.14)    −            (MW, 0.41)
  (E, 0.14)    (B, 0.45)    (W, 0.44)    (MB, 0.35)   −

P̄_3 =
  −            (MW, 0.39)   (B, 0.40)    (MB, 0.38)   (MB, 0.38)
  (B, 0.40)    −            (E, 0.45)    (W, 0.35)    (W, 0.35)
  (MW, 0.39)   (E, 0.45)    −            (E, 0.45)    (W, 0.35)
  (W, 0.35)    (B, 0.40)    (E, 0.45)    −            (B, 0.40)
  (MW, 0.39)   (B, 0.40)    (MB, 0.38)   (W, 0.35)    −

P̄_4 =
  −            (W, 0.41)    (MB, 0.42)   (MB, 0.42)   (B, 0.39)
  (B, 0.39)    −            (MW, 0.43)   (MW, 0.43)   (E, 0.06)
  (MW, 0.43)   (MB, 0.42)   −            (B, 0.39)    (W, 0.41)
  (MW, 0.43)   (MB, 0.42)   (W, 0.41)    −            (MB, 0.42)
  (W, 0.41)    (E, 0.06)    (B, 0.39)    (MW, 0.43)   −
The 2-tuple linguistic preference relations are aggregated using the Φ_Q^AC operator. The consistency degree of the linguistic preference values serves as the order-inducing variable. For aggregation, the linguistic quantifier “most of,” modeled as Q(r) = r^{1/2}, is employed. By applying Equation (20), a weighting vector with four values is generated to calculate the collective 2-tuple linguistic preference relation:
P̄^c =
  −            (W, 0.44)    (E, 0.29)    (B, 0.33)    (E, 0.33)
  (E, 0.38)    −            (B, 0.28)    (B, 0.10)    (E, 0.24)
  (E, 0.37)    (E, 0.41)    −            (E, 0.13)    (E, 0.45)
  (W, 0.16)    (W, 0.09)    (W, 0.48)    −            (W, 0.21)
  (W, 0.45)    (E, 0.14)    (B, 0.41)    (B, 0.09)    −
For instance, p ¯ 54 c is calculated as:
p̄_54^c = Δ(w_1 · Δ^{-1}(p̄_54^2) + w_2 · Δ^{-1}(p̄_54^1) + w_3 · Δ^{-1}(p̄_54^3) + w_4 · Δ^{-1}(p̄_54^4)) = (B, 0.09)
where:
cl_54^1 = 0.84   cl_54^2 = 0.88   cl_54^3 = 0.82   cl_54^4 = 0.65
p̄_54^1 = (MB, 0.45)   p̄_54^2 = (MB, 0.35)   p̄_54^3 = (W, 0.35)   p̄_54^4 = (MW, 0.43)
σ(1) = 2   σ(2) = 1   σ(3) = 3   σ(4) = 4
w_1 = 0.53   w_2 = 0.20   w_3 = 0.16   w_4 = 0.11
and where the weighting vector W = (w_1, w_2, w_3, w_4) has been calculated considering that T = cl_54^1 + cl_54^2 + cl_54^3 + cl_54^4 = 0.84 + 0.88 + 0.82 + 0.65 = 3.19 and:
w_1 = Q(cl_54^2 / T) − 0 = 0.53 − 0 = 0.53
w_2 = Q((cl_54^2 + cl_54^1) / T) − Q(cl_54^2 / T) = 0.73 − 0.53 = 0.20
w_3 = Q((cl_54^2 + cl_54^1 + cl_54^3) / T) − Q((cl_54^2 + cl_54^1) / T) = 0.89 − 0.73 = 0.16
w_4 = Q((cl_54^2 + cl_54^1 + cl_54^3 + cl_54^4) / T) − Q((cl_54^2 + cl_54^1 + cl_54^3) / T) = 1.00 − 0.89 = 0.11
Utilizing again the linguistic quantifier “most of” and Equation (18), the weighting vector W = ( w 1 , w 2 , w 3 , w 4 ) obtained is:
w_1 = Q(1/4) − 0 = 0.50 − 0 = 0.50
w_2 = Q(2/4) − Q(1/4) = 0.71 − 0.50 = 0.21
w_3 = Q(3/4) − Q(2/4) = 0.87 − 0.71 = 0.16
w_4 = Q(1) − Q(3/4) = 1 − 0.87 = 0.13
The quantifier-guided dominance and non-dominance degrees, QGDD and QGNDD, of all the alternatives are:

          x_1          x_2          x_3          x_4          x_5
QGDD      (B, 0.29)    (B, 0.37)    (E, 0.14)    (W, 0.32)    (B, 0.47)
QGNDD     (MB, 0.03)   (MB, 0.05)   (MB, 0.45)   (B, 0.26)    (MB, 0.12)
The maximal sets are X^QGDD = {x_1} and X^QGNDD = {x_1} and, therefore, applying the conjunction selection policy we get:
X^QGCP = X^QGDD ∩ X^QGNDD = {x_1}
This means that, according to “most of” the most consistent experts, the best company to provide the ventilation systems is BHTC (x_1).

5. Comparative Analysis

Several concurrent research lines address related challenges in linguistic group decision-making, including granular-computing-based approaches that optimize linguistic intervals, multi-granular linguistic models, and type-2 fuzzy frameworks designed to capture semantic variability. To position our contribution within this landscape, we compare the proposed approach with representative methods from these lines of research. While existing approaches either assume common semantics, model linguistic terms through intervals, or use complex type-2 structures, our method distinguishes itself by directly personalizing the symbolic translation of each linguistic value using an optimization-driven mechanism. This personalization yields higher consistency levels and more faithful semantic alignment with individual experts. The comparison confirms that the proposed model constitutes a substantive methodological improvement over these existing alternatives.
To analyze the performance of the proposed approach, denoted as personalized-approach, we consider one method where linguistic values are directly converted into 2-tuple linguistic values by setting the symbolic translations to 0, denoted as direct-approach. We also consider a method where a unified realization is carried out, so all linguistic values are converted into the same 2-tuple linguistic values for all experts, denoted as unified-approach. We conduct an experiment in which 100 randomly generated GDM problems are used. These problems vary in the number of alternatives, experts, and linguistic terms. Figure 2 shows the value of the optimization criterion f (global consistency level) achieved by each approach. The values for the granular approaches (the unified-approach and personalized-approach) are clearly higher than those for the direct-approach. This suggests that granular approaches can operationalize linguistic values and enhance consistency. Additionally, the values from the personalized-approach are higher than those from the unified-approach. This result verifies the superiority of the proposed approach, which achieves greater consistency levels by personalizing the semantics of the linguistic values.
To verify whether the rankings of the alternatives produced by the direct-approach, the unified-approach, and the personalized-approach differ across the 100 randomly generated GDM problems, let $R_d$, $R_u$, and $R_p$ denote the rankings these approaches yield. These rankings are obtained after applying the aggregation and exploitation phases described in Section 3, where the $QGDD$ is utilized to generate the rankings. Again, $Q(r) = r^{1/2}$ is used to represent the linguistic quantifier "most of", which is utilized in Equations (18) and (20) to produce the weights of the OWA and the additive-consistency 2-tuple IOWA operators. The $\ell_1$-norm distances between $R_d$ and $R_p$, and between $R_u$ and $R_p$, are calculated. These distances measure the decision discrepancies between the direct-approach and the personalized-approach, $DD_{dp}$, and between the unified-approach and the personalized-approach, $DD_{up}$, respectively:
$$DD_{dp} = \frac{1}{n}\sum_{i=1}^{n}\left|r_i^{d} - r_i^{p}\right|$$

$$DD_{up} = \frac{1}{n}\sum_{i=1}^{n}\left|r_i^{u} - r_i^{p}\right|$$
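A minimal sketch of these discrepancy measures, assuming each ranking is given as the list of positions assigned to the n alternatives (1 = best):

```python
def decision_discrepancy(rank_a, rank_b):
    """Normalized l1-norm distance between two rankings of the same n
    alternatives: (1/n) * sum_i |r_i^a - r_i^b|.  A value of 0 means
    the two approaches rank every alternative identically."""
    assert len(rank_a) == len(rank_b)
    return sum(abs(a - b) for a, b in zip(rank_a, rank_b)) / len(rank_a)
```

For instance, swapping the two best alternatives in a four-alternative problem, `decision_discrepancy([1, 2, 3, 4], [2, 1, 3, 4])`, gives a discrepancy of 0.5.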
Figure 3 and Figure 4 highlight the differences in decisions reached by the direct-approach and the personalized-approach, and between the unified-approach and the personalized-approach, respectively. Most decision discrepancy values are greater than 0, especially when compared with the direct-approach (see Figure 3). Only a few GDM problems show no discrepancy at all. This demonstrates that the personalized-approach often leads to distinct outcomes compared with the other methods.
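For completeness, the quantifier-guided weights used in the aggregation step can be sketched with Yager's method, where the default quantifier $Q(r) = r^{1/2}$ models the fuzzy majority "most of" (a generic illustration, not the exact routine used in the experiments):

```python
def quantifier_weights(n, Q=lambda r: r ** 0.5):
    """OWA weights induced by a regular increasing monotone linguistic
    quantifier Q (Yager): w_i = Q(i/n) - Q((i-1)/n).  The weights sum
    to Q(1) - Q(0) = 1 by telescoping."""
    return [Q(i / n) - Q((i - 1) / n) for i in range(1, n + 1)]
```

With `n = 4` and the default quantifier, the first weight is `Q(0.25) = 0.5`, so the top-ranked argument dominates the aggregation, as expected for "most of".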

6. Concluding Remarks

In this study, we introduced a new approach using a granular computing framework to personalize linguistic information in GDM processes. We described the granulation of personalized linguistic information and its optimization using the PSO technique. Unlike existing granular-computing approaches to computing with words, our approach formalizes each linguistic value through a personalized linguistic 2-tuple (information granule). The case study and the numerical experiments demonstrated the performance and effectiveness of the proposed approach for solving GDM problems. In comparison with the granular approach based on a unified realization of the linguistic values, we showed that the proposed granular approach enables the personalization of linguistic values, resulting in higher consistency levels. Furthermore, it accounts for the possibility that the same linguistic value may convey slightly different meanings to different experts, even when all of them share the same term set. Building on these insights, the proposed approach can improve GDM processes and decisions, ensuring both accuracy and alignment with individual understandings and preferences.
We propose to advance this research in two directions. First, we have assumed that each expert uses a single linguistic label to state a preference between two alternatives. However, single linguistic terms often fail to capture the depth of expert knowledge, making it necessary to develop richer, easily interpretable linguistic expressions that convey more nuanced information, such as extended comparative linguistic expressions with symbolic translation [54]. Second, our focus has been on the consistency of individual experts, but the collective agreement among their preferences must also be addressed. The consensus-reaching process is essential in GDM models and has been widely studied in similar contexts. Thus, a GDM model that integrates a consensus-reaching process for personalized linguistic values should be developed.
Although the proposed approach demonstrates strong capabilities for personalizing linguistic semantics in GDM environments, several limitations should be acknowledged. First, the method relies on the optimization of symbolic translations through PSO, whose performance may depend on parameter settings or population size, even though the computational effort is modest. Second, the approach assumes that experts provide complete linguistic preference relations; scenarios involving incomplete or missing evaluations are not addressed in this version of the model. Third, the personalization of linguistic semantics is performed independently for each expert, which may limit the integration of shared semantic structures in very large groups. Finally, the method was tested on a single real-world case study to illustrate its functioning; thus, broader empirical validation in diverse domains would be beneficial. These aspects open several avenues for future research, including the extension to incomplete linguistic information, hybrid optimization schemes, and large-scale experimental analysis across heterogeneous decision-making scenarios.

Author Contributions

Conceptualization, A.E.-V. and F.J.C.; methodology, F.J.C. and Y.Z.-V.; software, A.E.-V. and F.J.C.; validation, A.E.-V. and J.R.T.; formal analysis, Y.Z.-V. and J.R.T.; investigation, A.E.-V. and F.J.C.; writing—original draft preparation, A.E.-V.; writing—review and editing, F.J.C.; visualization, A.E.-V. and J.R.T.; supervision, F.J.C. and Y.Z.-V.; funding acquisition, F.J.C. and A.E.-V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the grant PID2022-139297OB-I00 funded by MICIU/AEI/10.13039/501100011033 and by ERDF/EU, and by the research grant from the Asociación Universitaria Iberoamericana de Postgrado (AUIP).

Data Availability Statement

Dataset available on request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Islei, G.; Lockett, G. Group decision making: Suppositions and practice. Socio-Econ. Plan. Sci. 1991, 25, 67–81. [Google Scholar] [CrossRef]
  2. Saaty, T.L.; Peniwati, K. Group Decision Making: Drawing Out and Reconciling Differences; RWS Publications: Pittsburgh, PA, USA, 2013. [Google Scholar]
  3. Herrera-Viedma, E.; Palomares, I.; Li, C.C.; Cabrerizo, F.J.; Dong, Y.C.; Chiclana, F.; Herrera, F. Revisiting fuzzy and linguistic decision making: Scenarios and challenges for making wiser decisions in a better way. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 191–208. [Google Scholar] [CrossRef]
  4. Millet, I. The effectiveness of alternative preference elicitation methods in the analytic hierarchy process. J. Multi-Criteria Decis. Anal. 1997, 6, 41–51. [Google Scholar] [CrossRef]
  5. Xu, Z.S. A survey of preference relations. Int. J. Gen. Syst. 2007, 36, 179–203. [Google Scholar] [CrossRef]
  6. Li, C.C.; Dong, Y.C.; Xu, Y.; Chiclana, F.; Herrera-Viedma, E.; Herrera, F. An overview on managing additive consistency of reciprocal preference relations for consistency-driven decision making and fusion: Taxonomy and future directions. Inf. Fusion 2019, 52, 143–156. [Google Scholar] [CrossRef]
  7. Zhang, C.; Luo, D.; Su, W.; Benjamin, L. Optimal ranking model of fuzzy preference relations with self-confidence for addressing self-confidence failure. Eur. J. Oper. Res. 2025, 322, 615–628. [Google Scholar] [CrossRef]
  8. Liu, W.; Zhang, H.; Chen, X.; Yu, S. Managing consensus and self-confidence in multiplicative preference relations in group decision making. Knowl.-Based Syst. 2018, 162, 62–73. [Google Scholar] [CrossRef]
  9. Zadeh, L.A. Fuzzy logic = computing with words. IEEE Trans. Fuzzy Syst. 1996, 4, 103–111. [Google Scholar] [CrossRef]
  10. Zadeh, L.A. Towards a theory of fuzzy systems. In Fuzzy Sets, Fuzzy Logic, and Fuzzy Systems: Selected Papers by Lotfi A. Zadeh; World Scientific: Singapore, 1996; pp. 83–104. [Google Scholar]
  11. Herrera, F.; Alonso, S.; Chiclana, F.; Herrera-Viedma, E. Computing with words in decision making: Foundations, trends and prospects. Fuzzy Optim. Decis. Mak. 2009, 8, 337–364. [Google Scholar] [CrossRef]
  12. Cabrerizo, F.J.; Herrera-Viedma, E.; Pedrycz, W. A method based on PSO and granular computing of linguistic information to solve group decision making problems defined in heterogeneous contexts. Eur. J. Oper. Res. 2013, 230, 624–633. [Google Scholar] [CrossRef]
  13. Cabrerizo, F.J.; Morente-Molinera, J.A.; Pedrycz, W.; Taghavi, A.; Herrera-Viedma, E. Granulating linguistic information in decision making under consensus and consistency. Expert Syst. Appl. 2018, 99, 83–92. [Google Scholar] [CrossRef]
  14. Pedrycz, W.; Song, M. A granulation of linguistic information in AHP decision-making problems. Inf. Fusion 2014, 17, 93–101. [Google Scholar] [CrossRef]
  15. Callejas, E.A.; Cerrada, J.A.; Cerrada, C.; Cabrerizo, F.J. Group decision making based on a framework of granular computing for multi-criteria and linguistic contexts. IEEE Access 2019, 7, 54670–54681. [Google Scholar] [CrossRef]
  16. Su, H.; Wu, Q.; Tang, X.; Huang, T. A consistency and consensus-driven approach for granulating linguistic information in GDM with distributed linguistic preference relations. Artif. Intell. Rev. 2023, 56, 6627–6659. [Google Scholar] [CrossRef]
  17. Pedrycz, W. The principle of justifiable granularity and an optimization of information granularity allocation as fundamentals of granular computing. J. Inf. Process Syst. 2011, 7, 397–412. [Google Scholar] [CrossRef]
  18. Huang, T.; Tang, X.; Zhao, S.; Zhang, Q.; Pedrycz, W. Linguistic information-based granular computing based on a tournament selection operator-guided PSO for supporting multi-attribute group decision-making with distributed linguistic preference relations. Inf. Sci. 2022, 610, 488–507. [Google Scholar] [CrossRef]
  19. Zhang, Q.; Huang, T.; Tang, X.; Xu, K.; Pedrycz, W. A linguistic information granulation model and its penalty function-based co-evolutionary PSO solution approach for supporting GDM with distributed linguistic preference relations. Inf. Fusion 2022, 77, 118–132. [Google Scholar] [CrossRef]
  20. Mendel, J.M.; Wu, D. Perceptual Computing: Aiding People in Making Subjective Judgments; Wiley-IEEE Press: Hoboken, NJ, USA, 2010. [Google Scholar]
  21. Mendel, J.M. Computing with words and its relationships with fuzzistics. Inf. Sci. 2007, 177, 988–1006. [Google Scholar] [CrossRef]
  22. Dong, Y.C.; Zha, Q.; Zhang, H.; Kou, G.; Fujita, H.; Chiclana, F.; Herrera-Viedma, E. Consensus reaching in social network group decision making: Research paradigms and challenges. Knowl.-Based Syst. 2018, 162, 3–13. [Google Scholar] [CrossRef]
  23. Morente-Molinera, J.A.; Pérez, I.J.; Urena, R.; Herrera-Viedma, E. On multi-granular fuzzy linguistic modeling in group decision making problems: A systematic review and future trends. Knowl.-Based Syst. 2015, 74, 49–60. [Google Scholar] [CrossRef]
  24. Mendel, J.M. Type-2 fuzzy sets as well as computing with words. IEEE Comput. Intell. Mag. 2019, 14, 82–95. [Google Scholar] [CrossRef]
  25. Herrera, F.; Martinez, L. A 2-tuple fuzzy linguistic representation model for computing with words. IEEE Trans. Fuzzy Syst. 2000, 8, 746–752. [Google Scholar] [CrossRef]
  26. Dong, Y.C.; Xu, Y.F.; Yu, S. Computing the numerical scale of the linguistic term set for the 2-tuple fuzzy linguistic representation model. IEEE Trans. Fuzzy Syst. 2009, 17, 1366–1378. [Google Scholar] [CrossRef]
  27. Dong, Y.C.; Li, C.C.; Herrera, F. Connecting the linguistic hierarchy and the numerical scale for the 2-tuple linguistic model and its use to deal with hesitant unbalanced linguistic information. Inf. Sci. 2016, 367–368, 259–278. [Google Scholar] [CrossRef]
  28. Li, C.C.; Dong, Y.C.; Herrera, F.; Herrera-Viedma, E.; Martinez, L. Personalized individual semantics in computing with words for supporting linguistic group decision making. Inf. Fusion 2017, 33, 29–40. [Google Scholar] [CrossRef]
  29. Li, C.C.; Dong, Y.C.; Chiclana, F.; Herrera-Viedma, E. Consistency-driven methodology to manage incomplete linguistic preference relation: A perspective based on personalized individual semantics. IEEE Trans. Cybern. 2022, 52, 6170–6180. [Google Scholar] [CrossRef]
  30. Li, Y.; Chen, M.; Li, C.C.; Dong, Y.C.; Herrera, F. Measuring additive consistency of linguistic preference relations in a personalized-individual-semantics context: A systematic investigation with axiomatic design. IEEE Trans. Syst. Man Cybern. Syst. 2024, 54, 2613–2624. [Google Scholar] [CrossRef]
  31. Martinez, L.; Rodríguez, R.M.; Herrera, F. The 2-Tuple Linguistic Model. Computing with Words in Decision Making; Springer International Publishing: Cham, Switzerland, 2015. [Google Scholar] [CrossRef]
  32. Cabrerizo, F.J.; Heradio, R.; Pérez, I.J.; Herrera-Viedma, E. A selection process based on additive consistency to deal with incomplete fuzzy linguistic information. J. Univers. Comput. Sci. 2010, 16, 62–81. [Google Scholar] [CrossRef]
  33. Marin Diaz, G.; Galdon Salvador, J.L. Group decision-making model based on 2-tuple fuzzy linguistic model and AHP applied to measuring digital maturity level of organizations. Systems 2023, 11, 341. [Google Scholar] [CrossRef]
  34. Jin, F.; Guo, S.; Cai, Y.; Liu, J.; Zhou, L. 2-tuple linguistic decision-making with consistency adjustment strategy and data envelopment analysis. Eng. Appl. Artif. Intell. 2023, 118, 105671. [Google Scholar] [CrossRef]
  35. Yao, S.; Khalid, A. Completing 2-tuple linguistic preference relations based on upper bound condition. Soft Comput. 2018, 22, 6215–6227. [Google Scholar] [CrossRef]
  36. Herrera-Viedma, E.; Herrera, F.; Chiclana, F.; Luque, M. Some issues on consistency of fuzzy preference relations. Eur. J. Oper. Res. 2004, 154, 98–109. [Google Scholar] [CrossRef]
  37. Tanino, T. Fuzzy preference orderings in group decision making. Fuzzy Sets Syst. 1984, 12, 117–131. [Google Scholar] [CrossRef]
  38. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  39. Shami, T.M.; El-Saleh, A.A.; Alswaitti, M.; Al-Tashi, Q.; Summakieh, M.; Mirjalili, S. Particle swarm optimization: A comprehensive survey. IEEE Access 2022, 10, 10031–10061. [Google Scholar] [CrossRef]
  40. Zhang, T.; Zhang, Y.; Ma, F.; Peng, C.; Yue, D.; Pedrycz, W. Local boundary fuzzified rough k-means-based information granulation algorithm under the principle of justifiable granularity. IEEE Trans. Cybern. 2024, 54, 519–532. [Google Scholar] [CrossRef]
  41. Yager, R.R. On ordered weighted averaging aggregation operators in multicriteria decision making. IEEE Trans. Syst. Man Cybern. 1988, 18, 183–190. [Google Scholar] [CrossRef]
  42. Kacprzyk, J. Group decision making with a fuzzy linguistic majority. Fuzzy Sets Syst. 1986, 18, 105–118. [Google Scholar] [CrossRef]
  43. Zadeh, L.A. A computational approach to fuzzy quantifiers in natural languages. Comput. Math. Appl. 1983, 9, 149–184. [Google Scholar] [CrossRef]
  44. Herrera, F.; Herrera-Viedma, E. Aggregation operators for linguistic weighted information. IEEE Trans. Syst. Man Cybern. A Syst. Hum. 1997, 27, 646–656. [Google Scholar] [CrossRef]
  45. Herrera, F.; Herrera-Viedma, E.; Verdegay, J.L. Choice processes for non-homogeneous group decision making in linguistic setting. Fuzzy Sets Syst. 1998, 94, 287–308. [Google Scholar] [CrossRef]
  46. Yager, R.R.; Filev, D.P. Induced ordered weighted averaging operators. IEEE Trans. Syst. Man Cybern. Syst. 1999, 29, 141–150. [Google Scholar] [CrossRef]
  47. Yager, R.R. Quantifier guided aggregation using OWA operators. Int. J. Intell. Syst. 1996, 11, 49–73. [Google Scholar] [CrossRef]
  48. Chiclana, F.; Herrera-Viedma, E.; Herrera, F.; Alonso, S. Some induced ordered weighted averaging operators and their use for solving group decision-making problems based on fuzzy preference relations. Eur. J. Oper. Res. 2007, 182, 383–399. [Google Scholar] [CrossRef]
  49. Herrera-Viedma, E.; Chiclana, F.; Herrera, F.; Alonso, S. A group decision-making model with incomplete fuzzy preference relations based on additive consistency. IEEE Trans. Syst. Man Cybern. B Cybern. 2007, 37, 176–189. [Google Scholar] [CrossRef] [PubMed]
  50. Velykorusova, A.; Zavadskas, E.K.; Tupenaite, L.; Kanapeckiene, L.; Migilinskas, D.; Kutut, V.; Ubarte, I.; Abaravicius, Z.; Kaklauskas, A. Intelligent multi-criteria decision support for renovation solutions for a building based on emotion recognition by applying the COPRAS method and BIM integration. Appl. Sci. 2023, 13, 5453. [Google Scholar] [CrossRef]
  51. DBN B.2.5-67:2013; Heating, Ventilation and Air Conditioning. Ministry of Regional Development, Construction and Housing of Ukraine: Kyiv, Ukraine, 2013.
  52. DBN B.2.3-15:2007; Parking Lots and Garages for Passenger Cars. Derzhbud of Ukraine: Kyiv, Ukraine, 2007.
  53. GOST 19904-74; Cold-Rolled Sheet. Range. State Committee of Standards of the Council of Ministers of the USSR: Moscow, USSR, 1974.
  54. Romero, A.L.; Rodriguez, R.M.; Martinez, L. Computing with comparative linguistic expressions and symbolic translation for decision making: ELICIT information. IEEE Trans. Fuzzy Syst. 2020, 28, 2510–2522. [Google Scholar] [CrossRef]
Figure 1. Flow chart of the proposed method.
Figure 2. Results achieved by the personalized-approach vs. unified-approach vs. direct-approach.
Figure 3. Decision discrepancy between the direct- and personalized-approaches.
Figure 4. Decision discrepancy between the unified- and personalized-approaches.
