Article

Multi-Granulation Variable Precision Fuzzy Rough Set Based on Generalized Fuzzy Remote Neighborhood Systems and the MADM Application Design of a Novel VIKOR Method

1. Reading Academy, Nanjing University of Information Science and Technology, Nanjing 210044, China
2. School of Mathematics and Statistics, Nanjing University of Information Science and Technology, Nanjing 210044, China
* Author to whom correspondence should be addressed.
Symmetry 2026, 18(1), 84; https://doi.org/10.3390/sym18010084
Submission received: 30 November 2025 / Revised: 30 December 2025 / Accepted: 31 December 2025 / Published: 3 January 2026
(This article belongs to the Special Issue Symmetry and Fuzzy Set)

Abstract

Variable precision fuzzy rough sets (VPFRSs) and multi-granulation fuzzy rough sets (MGFRSs) are both significant extensions of rough sets. However, existing variable precision models generally lack the inclusion property, which poses potential risks in applications. Meanwhile, multi-granulation models tend to emphasize either optimistic or pessimistic scenarios but overlook compromise situations. A generalized fuzzy remote neighborhood system is a symmetric union-fuzzified form of the neighborhood system, which can extend the fuzzy rough set model to a more general framework. Moreover, semi-grouping functions eliminate the left-continuity required for grouping functions and the associativity in t-conorms, making them more suitable for information aggregation. Therefore, to overcome the limitations of existing models, we propose an optimistic (OP), pessimistic (PE), and compromise (CO) variable precision fuzzy rough set (OPCAPFRS) based on generalized fuzzy remote neighborhood systems. The semi-grouping function and its residual minus are employed in the OPCAPFRS. We discuss the basic properties of the OPCAPFRS and prove that it satisfies the generalized inclusion property (GIP). This partially addresses the issue that a VPFRS cannot fulfill the inclusion property. A novel methodology for addressing multi-attribute decision-making (MADM) problems is developed through the fusion of the proposed OPCAPFRS framework and the VIKOR technique. The proposed method is applied to the problem of selecting an optimal CPU. Subsequently, comparative experiments and a parameter analysis are conducted to validate the effectiveness and stability of the proposed method. Finally, three sets of experiments are performed to verify the reliability and robustness of the new approach. It should be noted that the new method performed ranking on a dataset containing nearly ten thousand samples, obtaining both the optimal solution and a complete ranking, thereby validating its scalability.

1. Introduction

Rough set theory (RST), developed by Pawlak [1], effectively handles unpredictability arising from information granularity. However, its reliance on crisp sets and equivalence relations limits its ability to model the vagueness inherent in real-world concepts. To address this limitation, Dubois [2] extended the framework by integrating fuzzy set theory, proposing fuzzy rough sets to enhance its capability in handling uncertainty problems. Subsequently, extensive research has been dedicated to enriching the theoretical framework of fuzzy rough sets [3,4,5] and applying them to various fields such as decision analysis [6], feature extraction [7,8,9], image processing [10], and text classification [11].

1.1. An Overview of Variable Precision and Multi-Granulation Rough Sets

Variable precision rough sets (VPRSs) constitute a significant and impactful extension of RST. RST exhibits limited applicability due to its sensitivity to misclassification and noise interference in real-world data. To overcome these limitations, Ziarko [12] introduced the VPRS model. This framework incorporates a controlled error-tolerance parameter, which relaxes the strict boundary conditions to enhance robustness and applicability in processing noisy, uncertain data. In [13], D’eer conducted a critical evaluation of certain fuzzy rough set models, thereby establishing a consolidated mathematical foundation. A recognized limitation of these VPFRS models was their failure to satisfy the inclusion property, a fundamental requirement stating that the lower approximation must be contained within the upper. This deficiency elevated application risks, motivating subsequent research efforts to develop improved, more robust model versions. Notably, Jia [14] proposed a novel VPFRS model that satisfied the inclusion property and also incorporated the multi-granulation idea to address these risks.
Multi-granulation rough sets (MGRSs) also stand as a major advancement beyond RST. A key recognized limitation of RST is its dependence on a single equivalence relation for set approximations, which restricts its descriptive ability to a sole granular perspective. To address this limitation, Qian [15] proposed the multi-granulation rough set (MGRS) model, which enhances flexibility in decision-making scenarios by integrating multiple equivalence relations (granulations). In his subsequent work [16], Qian focused on the PE multi-granulation model and proposed a new two-stage “local-to-global” fusion strategy, which in turn remarkably strengthened the framework’s robustness and practical value. Within the MGRS framework, OP models allow for more risk tolerance, while PE models set stricter criteria. Lin [17] developed neighborhood-based multi-granulation rough sets by defining neighborhood relations, thus enabling the MGRS approach to directly handle numerical data. By systematically constructing a rigorous mathematical model for multi-granulation spaces, Yao [18] proposed a novel rough set theory that notably elevated computational efficiency. This progress overcame the static limitations of the fusion strategy in Qian’s PE model and expanded the framework to generalized neighborhood systems. However, many MGRS models are notably prone to noise and misclassification issues. As a result, researchers such as Chen [19], Feng [20], Shi [21], and Sun [22] have combined them with VPRSs to achieve better performance.
The concept of neighborhood systems (NSs), deriving from the geometric idea of "nearness" and fundamental in topology, has been widely applied in RST. In turn, remote neighborhood systems (RNSs), derived from the geometric idea of "remoteness", serve a foundational role in the theory of topological molecular lattices. RNSs can therefore be regarded as the symmetric counterpart of NSs. Notably, Sun [23] proposed rough set models based on RNSs. Additionally, the generalized neighborhood system (GNS) was initially proposed by Sierpiński and subsequently examined by Lin [24]. Li [25,26,27] further extended GNSs to fuzzy contexts by proposing the generalized fuzzy neighborhood system (GFNS) and investigated the properties of rough sets based on that framework. A recent study [28] proposed a rough set model via the GFNS and demonstrated that the model unifies fuzzy relations, fuzzy coverings, and fuzzy neighborhoods within a single framework. Since RNSs are the dual of NSs, the GFNS naturally has its dual counterpart, the generalized fuzzy remote neighborhood system (GFRNS). Given this, Sun [29] investigated L-fuzzy rough sets via GFRNSs. However, there is currently limited research on MGRSs and VPRSs based on GFRNSs.

1.2. An Overview of the MADM and VIKOR Methods

MADM is a structured process designed to rank a limited number of explicit alternatives based on their performance across multiple criteria. It systematically transforms diverse and uncertain evaluation data into a comprehensive preference order, thereby aiding decision-makers in achieving a rational and optimal choice. Many researchers, such as Shi [30], Su [31], and Zhang [6], have proposed various methods to solve MADM problems.
Specifically, the VIKOR (VlseKriterijumska Optimizacija Kompromisno Resenje) method has been widely used in MADM problems concerning energy and environmental management, healthcare and pharmaceutical research, and other fields. Initially proposed by Opricovic and Tzeng, the well-known VIKOR approach seeks to determine CO solutions by balancing group utility maximization with individual regret minimization, while integrating specialist preferences. Yu [32] offered a thorough review of its academic development. Subsequently, its applicability was significantly extended to fuzzy sets, intuitionistic fuzzy sets, and other complex scenarios. An early extension of the VIKOR approach was developed by Sanayei [33], who incorporated fuzzy set theory to address supplier selection problems. Opricovic [34] later elaborated on the fuzzy VIKOR approach within its theoretical framework. Recent developments include its combination with fuzzy rough sets (FRSs) and Pythagorean fuzzy sets to enhance network security [35], and an enhanced strategy for intuitionistic fuzzy environments using similarity measures [36]. It is noteworthy that Jiang [37] constructed variable precision covering rough sets via t-norms and integrated them with the VIKOR approach for the selection and ranking of medications for Alzheimer's disease. However, a significant limitation in the aforementioned literature is that the weights for the VIKOR method are predominantly derived from expert opinions, resulting in inherent subjectivity.
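The classic VIKOR computation just described can be sketched in a few lines. The decision matrix `F`, the weights `w`, and the trade-off coefficient `v` below are illustrative assumptions, not data from any study cited above:

```python
import numpy as np

def vikor(F, w, v=0.5):
    """Classic VIKOR on a decision matrix F (rows: alternatives, columns:
    benefit-type attributes, each taking at least two distinct values),
    with attribute weights w and trade-off coefficient v between group
    utility and individual regret."""
    f_best, f_worst = F.max(axis=0), F.min(axis=0)
    d = (f_best - F) / (f_best - f_worst)      # normalized distance to the ideal
    S = (w * d).sum(axis=1)                    # group utility (weighted sum)
    R = (w * d).max(axis=1)                    # individual regret (worst attribute)
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return S, R, Q                             # smaller Q = better compromise

# hypothetical data: 4 alternatives scored on 3 benefit attributes
F = np.array([[7.0, 8.0, 6.0],
              [8.0, 6.0, 7.0],
              [6.0, 9.0, 5.0],
              [9.0, 7.0, 8.0]])
w = np.array([0.4, 0.3, 0.3])                  # illustrative expert weights
S, R, Q = vikor(F, w)
ranking = Q.argsort()                          # best alternative first
```

Here `ranking[0]` is the alternative minimizing Q; a complete VIKOR procedure would additionally check the acceptable-advantage and acceptable-stability conditions before declaring a single compromise solution.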

1.3. A Brief Review on Semi-Grouping Functions

To address the issue that the associative law does not always hold for t-norms and t-conorms in practice, Bustince [38,39] introduced overlap and grouping functions by omitting that property. Overlap and grouping functions have been widely used in areas such as classification [40,41], decision-making [3], image processing [10], and fuzzy inference systems [42], demonstrating their practical value.
Recently, Zhang [43] proposed semi-overlap functions by removing the right-continuity property of overlap functions. Inspired by this, Shi [30] proposed semi-grouping functions by eliminating the left-continuity property of grouping functions. Semi-overlap functions and semi-grouping functions are essentially generalizations of left-continuous triangular norms and right-continuous triangular conorms, respectively. Theoretically speaking, compared with triangular conorms and grouping functions, semi-grouping functions remove the conditions of associativity and left-continuity, enabling the acquisition of aggregation operators with better performance. The relationships among t-conorms, grouping functions, and semi-grouping functions are illustrated in Figure 1. From an application perspective, Shi [30] and Li [44] applied semi-overlap (semi-grouping) functions to three-way decisions and proved their effectiveness in decision-making problems. Li [45] further extended them to intuitionistic semi-grouping functions and utilized them in decision-making, achieving favorable results. Therefore, from both theoretical and practical perspectives, this operator exhibits distinct advantages in decision analysis. However, research on applying semi-grouping functions and their residual minus to MGRSs and VPRSs is still relatively scarce.

1.4. Rationale and Novel Contributions of This Paper

Based on the literature reviews of VPRS, MGRS, MADM, and semi-grouping functions, this study was motivated by the following aspects:
(1) A limitation of many existing VPRS models is their failure to satisfy the (generalized) inclusion property, which in turn restricts their practical application. Therefore, it is necessary to construct a model that integrates the advantages of VPFRSs and MGRSs while satisfying the GIP. If we integrate VPRSs and MGRSs based on these two powerful tools, GFRNSs and semi-grouping functions, the resulting new model will possess broader theoretical foundations and is expected to demonstrate outstanding performance in decision-making applications.
(2) When computing the group utility and individual regret values within VIKOR, their weights are typically provided by experts based on experience, exhibiting strong subjectivity. In contrast, the upper and lower approximations based on FRSs can determine two weights [21]. These weights are derived from the data themselves, overcoming the limitations of experience-based approaches. However, a single weight is either overly conservative (relying solely on the lower approximation) or overly aggressive (relying solely on the upper approximation). Therefore, it is necessary to create a CO weight through linear combination that lies between “absolute certainty” and “maximum possibility”, resulting in a more comprehensive and robust decision-making process.
(3) Existing VIKOR methods still suffer from certain drawbacks in experimental tests. The majority of experiments only incorporate a few to dozens of samples, making it impossible to verify the continued effectiveness, stability, and reliability of the novel methods with an increase in sample size. Hence, it is essential to perform experiments on large datasets and design appropriate methods and algorithms to evaluate the above-mentioned performance.
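The compromise-weighting idea in point (2) can be sketched as follows; the accuracy scores are hypothetical stand-ins for the lower- and upper-approximation accuracy measures developed later in the paper:

```python
import numpy as np

def compromise_weights(acc_lower, acc_upper, eta=0.5):
    """Blend a conservative weight vector (from lower-approximation accuracy)
    with an aggressive one (from upper-approximation accuracy), renormalized
    so the weights sum to 1."""
    w_lower = acc_lower / acc_lower.sum()
    w_upper = acc_upper / acc_upper.sum()
    w = eta * w_lower + (1.0 - eta) * w_upper
    return w / w.sum()

# hypothetical per-attribute accuracy scores
acc_lower = np.array([0.6, 0.9, 0.3])   # "absolute certainty" evidence
acc_upper = np.array([0.8, 0.7, 0.5])   # "maximum possibility" evidence
w = compromise_weights(acc_lower, acc_upper, eta=0.5)
```

Setting `eta=1` recovers the purely conservative weights and `eta=0` the purely aggressive ones; intermediate values give the compromise that the text argues lies between "absolute certainty" and "maximum possibility".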
The main contributions of this work can be summarized as follows:
(1) Theoretical contributions: We propose an OP, PE, and CO variable precision fuzzy rough set (OPCAPFRS) based on GFRNSs via semi-grouping functions. We verify that the OPCAPFRS can be regarded as a generalization of the models proposed by Yao [46] and Sun [29]. Subsequently, we investigate the basic properties of the OPCAPFRS and prove that it possesses the GIP. Finally, we propose two concepts of accuracy measure based on the OPCAPFRS and investigate their basic properties.
(2) Through the defined upper (lower) accuracy measure, we define two types of attribute weights and perform their linear combination. Subsequently, by integrating the VIKOR method with the OPCAPFRS model, we design a corresponding algorithm that introduces a novel decision-making approach for solving MADM problems. We test the new decision-making method on the Computer Hardware dataset and compare it with other decision-making methods. Experimental results demonstrate that our method is effective and reasonable compared to other decision-making approaches.
(3) We conduct a parameter analysis to test the impact of various variables in the input algorithm on experimental results and evaluate the algorithm on large-scale datasets. Overall, while rankings show some variations when different variables are changed, the optimal solution remains unchanged, demonstrating the sensitivity and stability of the new method. Test results indicate that the proposed method achieves complete ranking on large datasets, further proving that the proposed approach is efficacious, reliable, and robust.
This paper is structured as follows: In Section 2, we survey some key fundamental concepts as prerequisites. Section 3 introduces the OPCAPFRS based on GFRNSs via semi-grouping functions, with their fundamental properties investigated. Subsequently, Section 4 integrates the proposed models with the VIKOR method to formulate a novel approach for addressing an MADM problem, specifically the selection of an optimal CPU. The validity and superiority of the proposed methodology are demonstrated in Section 5 through a comparative analysis against existing decision-making methods. Further analyses on stability and sensitivity are conducted via a parametric analysis in Section 6. In Section 7, the robustness of the proposed methods and their applicability to large-scale datasets are subsequently validated through an experimental analysis. Section 8 concludes the paper. The mind map of this paper is shown in Figure 2. The abbreviations for frequently used terms are summarized in Table 1 to improve readability.

2. Preliminaries

For readers’ convenience, we provide a brief review of fundamental definitions and concepts used across the entire paper.
We first introduce the necessary concepts and notation. Consider a non-empty finite universe U .
(1) The power set of $U$ is denoted by $2^U$.
(2) A fuzzy set $A$ in $U$ is a mapping $A: U \to [0,1]$; the collection of all fuzzy sets on $U$ is denoted by $[0,1]^U$.
(3) The characteristic function of a subset $A \subseteq U$ is denoted by $\chi_A$.
(4) For $A, B \in [0,1]^U$, the relation $A \subseteq B$ holds if $A(o) \leq B(o)$ for every $o \in U$.
(5) The cardinality of a fuzzy set $A$ is defined as $|A| = \sum_{o \in U} A(o)$.
Definition 1
([29]). A mapping $RN: U \to 2^{[0,1]^U}$ qualifies as a GFRNS operator on $U$ provided that $RN(o)$ is non-empty for every $o \in U$. Here, $RN(o)$ denotes the GFRNS of $o$, with each $K \in RN(o)$ representing a fuzzy remote neighborhood of $o$.
Definition 2
([29]). The pair $(U, RN)$ is called a GFRNS approximation space.
Definition 3
([30]). A binary operator $\oplus: [0,1] \times [0,1] \to [0,1]$ is called a semi-grouping function if for all $\zeta, \omega \in [0,1]$ and any $\{\omega_i \mid i \in \Gamma\} \subseteq [0,1]$, the following conditions hold:
(SG1) $\zeta \oplus \omega = \omega \oplus \zeta$;
(SG2) if $\zeta = \omega = 0$, then $\zeta \oplus \omega = 0$;
(SG3) if $\zeta = 1$ or $\omega = 1$, then $\zeta \oplus \omega = 1$;
(SG4) $\oplus$ is increasing;
(SG5) $\oplus$ is right-continuous, i.e., $\zeta \oplus \bigwedge_{i \in \Gamma} \omega_i = \bigwedge_{i \in \Gamma} (\zeta \oplus \omega_i)$.
Moreover, we say that $\oplus$ is
(SG6) deflationary if $\forall \zeta \in [0,1]$, $\zeta \oplus 0 \leq \zeta$;
(SG7) inflationary if $\forall \zeta \in [0,1]$, $\zeta \oplus 0 \geq \zeta$;
(SG8) associative if $\forall \zeta, \omega, z \in [0,1]$, $(\zeta \oplus \omega) \oplus z = \zeta \oplus (\omega \oplus z)$.
The function $\ominus: [0,1] \times [0,1] \to [0,1]$ determined by $\zeta \ominus \omega = \bigwedge \{z \in [0,1] \mid \zeta \leq \omega \oplus z\}$ for all $\zeta, \omega \in [0,1]$ is called the residual minus of $\oplus$. We can easily see that $\zeta \ominus \omega \leq z \iff \zeta \leq \omega \oplus z$.
For $\zeta \in [0,1]$, set $n(\zeta) = 1 \ominus \zeta$; then the condition $n(n(\zeta)) = \zeta$ is called the double negation law (DN, for short).
Table 2 shows some examples of semi-grouping functions and their corresponding residual minus operations. In Table 2, all functions $\oplus$ satisfy (SG1)–(SG5). In particular, we indicate whether these functions satisfy (SG6), (SG7), (SG8), or (DN).
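As a concrete illustration (an assumption for this sketch, not an entry reproduced from Table 2), $G(\zeta, \omega) = \max(\zeta^2, \omega^2)$ is a well-known grouping function, hence also a semi-grouping function, and it is not associative, so it is not a t-conorm. The sketch below computes its residual minus and spot-checks the adjunction $\zeta \ominus \omega \leq z \iff \zeta \leq \omega \oplus z$ on a grid:

```python
def splus(z, w):
    # G(z, w) = max(z^2, w^2): commutative, increasing, continuous,
    # satisfies (SG1)-(SG5), but not associative (so not a t-conorm)
    return max(z * z, w * w)

def sminus(z, w):
    # residual minus: the smallest u with splus(w, u) >= z
    return 0.0 if w * w >= z else z ** 0.5

# non-associativity witness
assert splus(splus(0.5, 0.0), 0.8) != splus(0.5, splus(0.0, 0.8))

# adjunction spot-check: z ⊖ w <= u  iff  z <= w ⊕ u
grid = [i / 20 for i in range(21)]
for z in grid:
    for w in grid:
        for u in grid:
            assert (sminus(z, w) <= u + 1e-12) == (z <= splus(w, u) + 1e-12)
```

The adjunction is exactly the property that drives the residuation arguments used in the rest of the paper, and, as the witness shows, it survives without associativity.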
For $\oplus$ and $\ominus$, the following properties hold.
Lemma 1
([30]). For all $\zeta, \omega, z, \zeta_j, \omega_k \in [0,1]$, the following statements hold:
(1) $0 \ominus \zeta = 0$;
(2) $(\zeta \oplus \omega) \ominus \omega \leq \zeta \leq (\zeta \ominus \omega) \oplus \omega$;
(3) $\left(\bigvee_{j \in \Gamma} \zeta_j\right) \ominus \left(\bigwedge_{k \in \Lambda} \omega_k\right) = \bigvee_{j \in \Gamma} \bigvee_{k \in \Lambda} (\zeta_j \ominus \omega_k)$;
(4) if $\omega \leq z$, then $\omega \ominus \zeta \leq z \ominus \zeta$ and $\zeta \ominus z \leq \zeta \ominus \omega$;
(5) $n\left(\bigwedge_{i \in \Gamma} \zeta_i\right) = \bigvee_{i \in \Gamma} n(\zeta_i)$;
(6) $(\zeta \wedge \omega) \ominus z \leq (\zeta \ominus z) \wedge \omega$.
If $\ominus$ satisfies Lemma 1(4), then $\ominus$ is called monotonically increasing.
Lemma 2
([30]). For all $\zeta, \omega, z, \zeta_i \in [0,1]$, the following statements hold:
(1) if $\oplus$ satisfies (SG6) and (SG7), then $\zeta \oplus 0 = \zeta$;
(2) if $\oplus$ satisfies (SG8), then $(\zeta \ominus \omega) \ominus z = (\zeta \ominus z) \ominus \omega = \zeta \ominus (\omega \oplus z)$;
(3) if $\oplus$ satisfies (SG8), then $n(\zeta \oplus \omega) = n(\zeta) \ominus \omega = n(\omega) \ominus \zeta$;
(4) if $n$ satisfies (DN) and $\oplus$ satisfies (SG8), then $\zeta \ominus \omega = n(\omega) \ominus n(\zeta)$ and $n\left(\bigvee_{i \in \Gamma} \zeta_i\right) = \bigwedge_{i \in \Gamma} n(\zeta_i)$;
(5) if $n$ satisfies (DN) and $\oplus$ satisfies (SG6), (SG7), and (SG8), then $n(\zeta \ominus \omega) = n(\zeta) \oplus \omega$.
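The identities of Lemma 2 can be spot-checked numerically with the Łukasiewicz t-conorm $\zeta \oplus \omega = \min(1, \zeta + \omega)$, an associative semi-grouping function whose negation $n(\zeta) = 1 \ominus \zeta = 1 - \zeta$ satisfies (DN); this pair is chosen here purely for illustration:

```python
def splus(z, w):              # Łukasiewicz t-conorm (bounded sum)
    return min(1.0, z + w)

def sminus(z, w):             # its residual minus
    return max(0.0, z - w)

def n(z):                     # natural negation n(z) = 1 ⊖ z = 1 - z
    return sminus(1.0, z)

grid = [i / 10 for i in range(11)]
for z in grid:
    for w in grid:
        # Lemma 2(3): n(z ⊕ w) = n(z) ⊖ w = n(w) ⊖ z
        assert abs(n(splus(z, w)) - sminus(n(z), w)) < 1e-12
        assert abs(n(splus(z, w)) - sminus(n(w), z)) < 1e-12
        # Lemma 2(4): z ⊖ w = n(w) ⊖ n(z)
        assert abs(sminus(z, w) - sminus(n(w), n(z))) < 1e-12
        # Lemma 2(5): n(z ⊖ w) = n(z) ⊕ w
        assert abs(n(sminus(z, w)) - splus(n(z), w)) < 1e-12
```

A numerical pass over a grid is of course no substitute for the proofs in [30], but it makes the role of (DN) and (SG6)–(SG8) tangible.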
For $a, \alpha, \beta \in [0,1]$, the symbols $a, \alpha, \beta$ also denote the constant fuzzy sets with values $a, \alpha, \beta$, respectively.
For all $A, B \in [0,1]^U$ and $\circledast \in \{\oplus, \ominus, \wedge, \vee\}$, the fuzzy set $A \circledast B$ is defined pointwise by $(A \circledast B)(\zeta) = A(\zeta) \circledast B(\zeta)$ for all $\zeta \in U$.

3. OPCAPFRS Based on GFRNS

Within this part, we introduce optimistic (OP), pessimistic (PE), and compromise (CO) variable precision fuzzy rough sets (OPCAPFRSs) by means of GFRNSs and explore their inherent characteristics. In particular, we demonstrate that this model fulfills a GIP. By virtue of this property, we can formulate two innovative accuracy measures that are pivotal to later analytical applications.

3.1. OP, PE, and CO Variable Precision Fuzzy Rough Set (OPCAPFRS)

In this subsection, we introduce the OPCAPFRS based on the GFRNS via semi-grouping functions. Additionally, we demonstrate that some rough set models can be regarded as special cases of OPCAPFRSs.
Definition 4.
For a GFRNS approximation space $(U, RN)$ and $\alpha, \beta \in [0,1]$:
(1) The OP fuzzy upper and lower approximation operators $\overline{ORN}_\alpha, \underline{ORN}_\beta$ are defined by the following: for all $A \in [0,1]^U$, $o \in U$,
$$\overline{ORN}_\alpha(A)(o) = \bigwedge_{K \in RN(o)} \bigvee_{m \in U} \big( (\alpha \wedge A(m)) \ominus K(m) \big),$$
$$\underline{ORN}_\beta(A)(o) = \bigvee_{K \in RN(o)} \bigwedge_{m \in U} \big( (\beta \vee A(m)) \oplus K(m) \big).$$
The pair $\left(\overline{ORN}_\alpha(A), \underline{ORN}_\beta(A)\right)$ is called the OP $(\alpha, \beta)$-variable precision fuzzy rough set of $A$ via the GFRNS (OAPFRS).
(2) The PE fuzzy upper and lower approximation operators $\overline{PRN}_\alpha, \underline{PRN}_\beta$ are defined by the following: for all $A \in [0,1]^U$, $o \in U$,
$$\overline{PRN}_\alpha(A)(o) = \bigvee_{K \in RN(o)} \bigvee_{m \in U} \big( (\alpha \wedge A(m)) \ominus K(m) \big),$$
$$\underline{PRN}_\beta(A)(o) = \bigwedge_{K \in RN(o)} \bigwedge_{m \in U} \big( (\beta \vee A(m)) \oplus K(m) \big).$$
The pair $\left(\overline{PRN}_\alpha(A), \underline{PRN}_\beta(A)\right)$ is called the PE $(\alpha, \beta)$-variable precision fuzzy rough set of $A$ via the GFRNS (PAPFRS).
(3) For $\eta \in [0,1]$, the CO fuzzy upper and lower approximation operators $\overline{CRN}_\alpha, \underline{CRN}_\beta$ are defined by the following: for all $A \in [0,1]^U$,
$$\overline{CRN}_\alpha(A) = \eta\, \overline{ORN}_\alpha(A) + (1 - \eta)\, \overline{PRN}_\alpha(A),$$
$$\underline{CRN}_\beta(A) = \eta\, \underline{ORN}_\beta(A) + (1 - \eta)\, \underline{PRN}_\beta(A).$$
The pair $\left(\overline{CRN}_\alpha(A), \underline{CRN}_\beta(A)\right)$ is called the CO $(\alpha, \beta)$-variable precision fuzzy rough set of $A$ via the GFRNS (CAPFRS).
The three models mentioned above are collectively referred to as the OPCAPFRS.
Remark 1.
(1) In practical scenarios such as MADM and data classification, three core issues often coexist: (i) diversity of perspectives (e.g., evaluating a product requires considering multiple attribute dimensions such as “price, performance, and reputation” each corresponding to a distinct “cognitive perspective/granularity”); (ii) data fuzziness (e.g., subjective evaluations like “good performance” or “moderate price” lack clear boundaries, and continuous attributes also involve fuzzy partitions); and (iii) tolerance for error (e.g., minor data noise or outliers in certain attributes should not invalidate overall classification results, necessitating some allowance for classification errors).
Existing models struggle to address all three simultaneously: the MGRS integrates multi-granular information but lacks an error-tolerance mechanism; the VPRS allows classification errors only under a single granularity and cannot handle fuzziness or multi-perspective scenarios; and traditional FRSs support neither multi-granularity nor error tolerance. To address these limitations, this paper proposes the OPCAPFRS. The core objective is to achieve precise modeling and robust decision-making for complex uncertain data by introducing error-tolerance mechanisms and fuzzification within multi-granular (OP, PE, and CO) perspectives. The comparison of properties of different RST models is shown in Table 3.
(2) The relationships among the three fuzzy rough approximation operators are as follows. For all $A \in [0,1]^U$:
(i) $\overline{ORN}_\alpha(A) \subseteq \overline{CRN}_\alpha(A) \subseteq \overline{PRN}_\alpha(A)$ and $\underline{PRN}_\beta(A) \subseteq \underline{CRN}_\beta(A) \subseteq \underline{ORN}_\beta(A)$;
(ii) if $\eta = 0$, then $\overline{CRN}_\alpha(A) = \overline{PRN}_\alpha(A)$ and $\underline{CRN}_\beta(A) = \underline{PRN}_\beta(A)$;
(iii) if $\eta = 1$, then $\overline{CRN}_\alpha(A) = \overline{ORN}_\alpha(A)$ and $\underline{CRN}_\beta(A) = \underline{ORN}_\beta(A)$.
(3) If $RN(o)$ collapses to a single fuzzy remote neighborhood for every $o \in U$, the unique element of $RN(o)$ is simply written $K(o)$. Hence,
$$\overline{ORN}_\alpha(A)(o) = \overline{PRN}_\alpha(A)(o) = \overline{CRN}_\alpha(A)(o) = \bigvee_{m \in U} \big( (\alpha \wedge A(m)) \ominus K(o)(m) \big),$$
$$\underline{ORN}_\beta(A)(o) = \underline{PRN}_\beta(A)(o) = \underline{CRN}_\beta(A)(o) = \bigwedge_{m \in U} \big( (\beta \vee A(m)) \oplus K(o)(m) \big).$$
The subsequent example elucidates Definition 4.
Example 1.
Let $U = \{o_1, o_2, o_3\}$ and let $RN$ be the GFRNS determined by $RN(o_1) = \{0.3/o_1 + 0.7/o_2 + 0.1/o_3\}$, $RN(o_2) = \{1/o_1 + 0.2/o_2 + 0.8/o_3,\; 0.8/o_1 + 0.8/o_2 + 0.9/o_3\}$, and $RN(o_3) = \{0.2/o_1 + 0.9/o_2 + 0.5/o_3,\; 0.5/o_1 + 0.1/o_2 + 0.9/o_3,\; 0.1/o_1 + 0.5/o_2 + 0.6/o_3\}$.
Let $A = 0.7/o_1 + 0.2/o_2 + 0.6/o_3$.
(i) If $\oplus = \oplus_M$, $\ominus = \ominus_M$, $\alpha = 0.3$, $\beta = 0.7$, and $\eta = 0.5$, then
$$\overline{ORN}_\alpha(A) = 0.3/o_1 + 0/o_2 + 0.2/o_3, \quad \underline{ORN}_\beta(A) = 0.7/o_1 + 0.8/o_2 + 0.7/o_3,$$
$$\overline{PRN}_\alpha(A) = 0.2/o_1 + 0.2/o_2 + 0/o_3, \quad \underline{PRN}_\beta(A) = 0.7/o_1 + 0.7/o_2 + 0.7/o_3,$$
$$\overline{CRN}_\alpha(A) = 0.25/o_1 + 0.1/o_2 + 0.1/o_3, \quad \underline{CRN}_\beta(A) = 0.7/o_1 + 0.75/o_2 + 0.7/o_3.$$
(ii) If $\oplus = \oplus_M$, $\ominus = \ominus_M$, $\alpha = 0.8$, $\beta = 0.2$, and $\eta = 0.5$, then
$$\overline{ORN}_\alpha(A) = 0.7/o_1 + 0/o_2 + 0.7/o_3, \quad \underline{ORN}_\beta(A) = 0.6/o_1 + 0.8/o_2 + 0.6/o_3,$$
$$\overline{PRN}_\alpha(A) = 0.2/o_1 + 0.2/o_2 + 0/o_3, \quad \underline{PRN}_\beta(A) = 0.7/o_1 + 0.7/o_2 + 0.7/o_3,$$
$$\overline{CRN}_\alpha(A) = 0.45/o_1 + 0.1/o_2 + 0.35/o_3, \quad \underline{CRN}_\beta(A) = 0.65/o_1 + 0.75/o_2 + 0.65/o_3.$$
(iii) If $\oplus = \oplus_P$, $\ominus = \ominus_P$, $\alpha = 0.3$, $\beta = 0.7$, and $\eta = 0.5$, then
$$\overline{ORN}_\alpha(A) = 0.22/o_1 + 0/o_2 + 0.11/o_3, \quad \underline{ORN}_\beta(A) = 0.73/o_1 + 0.94/o_2 + 0.76/o_3,$$
$$\overline{PRN}_\alpha(A) = 0.2/o_1 + 0.2/o_2 + 0/o_3, \quad \underline{PRN}_\beta(A) = 0.7/o_1 + 0.7/o_2 + 0.88/o_3,$$
$$\overline{CRN}_\alpha(A) = 0.211/o_1 + 0.1/o_2 + 0.056/o_3, \quad \underline{CRN}_\beta(A) = 0.715/o_1 + 0.82/o_2 + 0.82/o_3.$$
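The OP values of case (i) can be reproduced mechanically. The sketch below hardcodes Example 1's data and reads the OP operators of Definition 4 with $\bigwedge$ over $K$ and $\bigvee$ over $m$ for the upper operator, and dually for the lower one; this reading is an interpretation for illustration:

```python
# Example 1 data: U = {o1, o2, o3}; each K in RN[o] is a membership vector
RN = {
    0: [[0.3, 0.7, 0.1]],
    1: [[1.0, 0.2, 0.8], [0.8, 0.8, 0.9]],
    2: [[0.2, 0.9, 0.5], [0.5, 0.1, 0.9], [0.1, 0.5, 0.6]],
}
A = [0.7, 0.2, 0.6]
alpha, beta = 0.3, 0.7
U = range(3)

def splus(z, w):                 # ⊕_M = max
    return max(z, w)

def sminus(z, w):                # residual minus of max
    return 0.0 if w >= z else z

def op_upper(o):                 # ⋀ over K, ⋁ over m of (α ∧ A(m)) ⊖ K(m)
    return min(max(sminus(min(alpha, A[m]), K[m]) for m in U) for K in RN[o])

def op_lower(o):                 # ⋁ over K, ⋀ over m of (β ∨ A(m)) ⊕ K(m)
    return max(min(splus(max(beta, A[m]), K[m]) for m in U) for K in RN[o])

upper = [op_upper(o) for o in U]   # Example 1(i) OP upper values
lower = [op_lower(o) for o in U]   # Example 1(i) OP lower values
```

Running this yields `upper == [0.3, 0.0, 0.2]` and `lower == [0.7, 0.8, 0.7]`, matching the $\overline{ORN}_\alpha(A)$ and $\underline{ORN}_\beta(A)$ values stated in case (i).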
Remark 2.
(1) It is widely recognized that each fuzzy metric $d: U \times U \to [0,1]$ determines a fuzzy remote neighborhood operator by $K_d(o) = \{d(o, \cdot)\}$ for each $o \in U$. Taking $\oplus$ as $\oplus_M$ and $\ominus$ as $\ominus_M$, Yao [46] defined the following: for all $o \in U$, $A \in [0,1]^U$,
$$\overline{d}(A)(o) = \bigvee_{m \in U} \big( A(m) \ominus d(m, o) \big),$$
$$\underline{d}(A)(o) = \bigwedge_{m \in U} \big( A(m) \oplus d(o, m) \big).$$
This indicates that Yao's model corresponds to the OPCAPFRS defined by a fuzzy metric with $\alpha = 1$ and $\beta = 0$.
(2) When $[0,1]^U$ degenerates into $\{0,1\}^U$, the GFRNS operator $RN: U \to 2^{[0,1]^U}$ degenerates into a generalized RNS operator $NF: U \to 2^{2^U}$. Sun's [23] OP rough set based on a generalized remote neighborhood system is a special OAPFRS. Sun defined the following: for all $A \in 2^U$,
$$\overline{ONF}(A) = \{o \in U \mid \forall K \in NF(o),\ A \nsubseteq K\},$$
$$\underline{ONF}(A) = \{o \in U \mid \exists K \in NF(o),\ n(A) \subseteq K\}.$$
Each $NF$ generates a GFRNS operator $RN_{NF}(o) = \{\chi_A \mid A \in NF(o)\}$. Then,
$$\overline{ONF}(A)(o) = \bigwedge_{K \in RN_{NF}(o)} \bigvee_{m \in U} \big( A(m) \ominus K(m) \big) = \overline{ORN_{NF}}(A)(o),$$
$$\underline{ONF}(A)(o) = \bigvee_{K \in RN_{NF}(o)} \bigwedge_{m \in U} \big( A(m) \oplus K(m) \big) = \underline{ORN_{NF}}(A)(o).$$
So Sun's model is the OAPFRS of $A$ based on a generalized RNS with $\alpha = 1$ and $\beta = 0$.

3.2. Basic Properties of OPCAPFRS

In this subsection, we explore the basic properties of the OPCAPFRS and its characterization. More precisely, we demonstrate that an OPCAPFRS meets a GIP, and leveraging this property, we propose two novel accuracy measures.
Let $U_\alpha, L_\beta\ (\alpha, \beta \in [0,1]): [0,1]^U \to [0,1]^U$ be mappings. Then, for all $A, B, A_i\ (i \in I) \in [0,1]^U$ and $a \in [0,1]$, we denote:
(U1) $U_\alpha(0) = 0$.
(L1) $L_\beta(1) = 1$.
(U2) $U_\alpha(1) = \alpha$.
(L2) $L_\beta(0) = \beta$.
(U3) $A \subseteq B \Rightarrow U_\alpha(A) \subseteq U_\alpha(B)$.
(L3) $A \subseteq B \Rightarrow L_\beta(A) \subseteq L_\beta(B)$.
(U4a) $\bigvee_{i \in I} U_\alpha(A_i) \subseteq U_\alpha\left(\bigvee_{i \in I} A_i\right)$.
(L4a) $L_\beta\left(\bigwedge_{i \in I} A_i\right) \subseteq \bigwedge_{i \in I} L_\beta(A_i)$.
(U4b) $U_\alpha\left(\bigvee_{i \in I} A_i\right) = \bigvee_{i \in I} U_\alpha(A_i)$.
(L4b) $L_\beta\left(\bigwedge_{i \in I} A_i\right) = \bigwedge_{i \in I} L_\beta(A_i)$.
(U5) $U_\alpha\left(\bigwedge_{i \in I} A_i\right) \subseteq \bigwedge_{i \in I} U_\alpha(A_i)$.
(L5) $\bigvee_{i \in I} L_\beta(A_i) \subseteq L_\beta\left(\bigvee_{i \in I} A_i\right)$.
(U6) $\alpha \wedge A \supseteq U_\alpha(A)$.
(L6) $L_\beta(A) \supseteq \beta \vee A$.
(U7) $U_\alpha(A) = U_\alpha(\alpha \wedge A)$.
(L7) $L_\beta(A) = L_\beta(\beta \vee A)$.
(U8) $\alpha \wedge U_\alpha(A) = U_\alpha(A)$.
(L8) $\beta \vee L_\beta(A) = L_\beta(A)$.
(U9) $U_\alpha(A) \ominus a \subseteq U_\alpha(A \ominus a)$.
(L9) $L_\beta(A) \oplus a \supseteq L_\beta(A \oplus a)$.
Moreover, if $\beta = n(\alpha)$, then:
(LDR1) $U_\alpha(A) = n\big(L_\beta(n(A))\big)$.
(LDR2) $L_\beta(A) = n\big(U_\alpha(n(A))\big)$.
(LD2) $U_\alpha(A) \subseteq n\big(L_\beta(n(A))\big)$ and $L_\beta(A) \supseteq n\big(U_\alpha(n(A))\big)$.
The proposition stated below presents the fundamental characteristics of the OAPFRS.
Proposition 1.
Let $(U, RN)$ be a GFRNS approximation space and $U_\alpha = \overline{ORN}_\alpha$, $L_\beta = \underline{ORN}_\beta$.
(1) $U_\alpha$ fulfills (U1), (U3), (U4a), (U5), (U7), and (U8), and $L_\beta$ fulfills (L1), (L3), (L4a), (L5), (L7), and (L8).
(2) If $n$ satisfies (DN) and $\oplus$ satisfies (SG6), (SG7), and (SG8), then $U_\alpha$ and $L_\beta$ fulfill (LDR1).
(3) If $\oplus$ satisfies (SG8), then $L_\beta$ fulfills (L9) and (LDR2).
(4) If $\oplus$ satisfies (SG8), then $U_\alpha$ fulfills (U9).
Proof. 
(1) (U1) Taking $o \in U$ and $\alpha \in [0,1]$,
$$U_\alpha(0)(o) = \bigwedge_{K \in RN(o)} \bigvee_{m \in U} \big( (\alpha \wedge 0) \ominus K(m) \big) = \bigwedge_{K \in RN(o)} \bigvee_{m \in U} \big( 0 \ominus K(m) \big) \overset{\text{Lemma 1(1)}}{=} 0 = 0(o).$$
(U8) Taking $A \in [0,1]^U$, $o \in U$, and $\alpha \in [0,1]$,
$$\alpha \wedge U_\alpha(A)(o) = \bigwedge_{K \in RN(o)} \bigvee_{m \in U} \Big( \alpha \wedge \big( (\alpha \wedge A(m)) \ominus K(m) \big) \Big) \overset{\text{Lemma 1(6)}}{\geq} \bigwedge_{K \in RN(o)} \bigvee_{m \in U} \big( (\alpha \wedge \alpha \wedge A(m)) \ominus K(m) \big) = U_\alpha(A)(o);$$
since $\alpha \wedge U_\alpha(A)(o) \leq U_\alpha(A)(o)$ trivially, (U8) follows.
(L1) Taking $o \in U$ and $\beta \in [0,1]$,
$$L_\beta(1)(o) = \bigvee_{K \in RN(o)} \bigwedge_{m \in U} \big( (\beta \vee 1) \oplus K(m) \big) = \bigvee_{K \in RN(o)} \bigwedge_{m \in U} \big( 1 \oplus K(m) \big) \overset{\text{(SG3)}}{=} 1 = 1(o).$$
(L8) Taking $A \in [0,1]^U$, $o \in U$, and $\beta \in [0,1]$,
$$\beta \vee L_\beta(A)(o) = \bigvee_{K \in RN(o)} \bigwedge_{m \in U} \Big( \beta \vee \big( (\beta \vee A(m)) \oplus K(m) \big) \Big) = \bigvee_{K \in RN(o)} \bigwedge_{m \in U} \big( (\beta \vee \beta \vee A(m)) \oplus K(m) \big) = \bigvee_{K \in RN(o)} \bigwedge_{m \in U} \big( (\beta \vee A(m)) \oplus K(m) \big) = L_\beta(A)(o).$$
(2) For $\alpha, \beta \in [0,1]$ with $\beta = n(\alpha)$, $A \in [0,1]^U$, and $o \in U$,
$$n\big( L_\beta(n(A)) \big)(o) = n\Big( \bigvee_{K \in RN(o)} \bigwedge_{m \in U} \big( (n(\alpha) \vee n(A(m))) \oplus K(m) \big) \Big) = n\Big( \bigvee_{K} \bigwedge_{m} \big( n(\alpha \wedge A(m)) \oplus K(m) \big) \Big) \overset{\text{Lemma 2(5)}}{=} n\Big( \bigvee_{K} \bigwedge_{m} n\big( (\alpha \wedge A(m)) \ominus K(m) \big) \Big) = n\Big( n\Big( \bigwedge_{K \in RN(o)} \bigvee_{m \in U} \big( (\alpha \wedge A(m)) \ominus K(m) \big) \Big) \Big) = \bigwedge_{K \in RN(o)} \bigvee_{m \in U} \big( (\alpha \wedge A(m)) \ominus K(m) \big) = U_\alpha(A)(o).$$
(3) (L9) Taking $A \in [0,1]^U$, $o \in U$, and $a, \beta \in [0,1]$,
$$\big( L_\beta(A) \oplus a \big)(o) = \Big( \bigvee_{K \in RN(o)} \bigwedge_{m \in U} \big( (\beta \vee A(m)) \oplus K(m) \big) \Big) \oplus a = \bigvee_{K} \bigwedge_{m} \Big( \big( (\beta \vee A(m)) \oplus K(m) \big) \oplus a \Big) \overset{\text{(SG8)}}{=} \bigvee_{K} \bigwedge_{m} \big( ((\beta \vee A(m)) \oplus a) \oplus K(m) \big) \geq \bigvee_{K} \bigwedge_{m} \big( (\beta \vee (A(m) \oplus a)) \oplus K(m) \big) = L_\beta(A \oplus a)(o).$$
When $\beta = 0$, we get $L_\beta(A) \oplus a = L_\beta(A \oplus a)$.
(LDR2) For $\alpha, \beta \in [0,1]$ with $\alpha = n(\beta)$, $A \in [0,1]^U$, and $o \in U$,
$$n\big( U_\alpha(n(A)) \big)(o) = n\Big( \bigwedge_{K \in RN(o)} \bigvee_{m \in U} \big( (n(\beta) \wedge n(A(m))) \ominus K(m) \big) \Big) = n\Big( \bigwedge_{K} \bigvee_{m} \big( n(\beta \vee A(m)) \ominus K(m) \big) \Big) \overset{\text{Lemma 2(3)}}{=} n\Big( \bigwedge_{K} \bigvee_{m} n\big( (\beta \vee A(m)) \oplus K(m) \big) \Big) = n\Big( n\Big( \bigvee_{K \in RN(o)} \bigwedge_{m \in U} \big( (\beta \vee A(m)) \oplus K(m) \big) \Big) \Big) = L_\beta(A)(o).$$
(4) (U9) Taking $A \in [0,1]^U$, $o \in U$, and $a \in [0,1]$,
$$\big( U_\alpha(A) \ominus a \big)(o) = \Big( \bigwedge_{K \in RN(o)} \bigvee_{m \in U} \big( (\alpha \wedge A(m)) \ominus K(m) \big) \Big) \ominus a \overset{\text{Lemma 1(3),(4)}}{\leq} \bigwedge_{K} \bigvee_{m} \Big( \big( (\alpha \wedge A(m)) \ominus K(m) \big) \ominus a \Big) \overset{\text{(SG8)}}{=} \bigwedge_{K} \bigvee_{m} \big( ((\alpha \wedge A(m)) \ominus a) \ominus K(m) \big) \overset{\text{Lemma 1(6)}}{\leq} \bigwedge_{K} \bigvee_{m} \big( (\alpha \wedge (A(m) \ominus a)) \ominus K(m) \big) = U_\alpha(A \ominus a)(o).$$
Moreover, when $\alpha = 1$, we get $U_\alpha(A) \ominus a = U_\alpha(A \ominus a)$. □
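The general-case properties in Proposition 1(1) can be spot-checked numerically. The sketch below uses the max-based pair $\oplus_M, \ominus_M$ from Example 1 together with randomly generated GFRNSs and fuzzy sets, and reads the OP operators of Definition 4 with $\bigwedge$ over $K$ and $\bigvee$ over $m$ for the upper operator (dually for the lower); it is an illustrative test, not a proof:

```python
import random

def splus(z, w):                          # ⊕_M = max
    return max(z, w)

def sminus(z, w):                         # residual minus of max
    return 0.0 if w >= z else z

def op_upper(RN, A, alpha):
    n = len(A)
    return [min(max(sminus(min(alpha, A[m]), K[m]) for m in range(n))
                for K in RN[o]) for o in range(n)]

def op_lower(RN, A, beta):
    n = len(A)
    return [max(min(splus(max(beta, A[m]), K[m]) for m in range(n))
                for K in RN[o]) for o in range(n)]

random.seed(0)
for _ in range(200):
    n = 4
    RN = {o: [[random.random() for _ in range(n)]
              for _ in range(random.randint(1, 3))] for o in range(n)}
    A = [random.random() for _ in range(n)]
    B = [max(a, random.random()) for a in A]          # B contains A
    alpha, beta = random.random(), random.random()
    assert op_upper(RN, [0.0] * n, alpha) == [0.0] * n          # (U1)
    assert op_lower(RN, [1.0] * n, beta) == [1.0] * n           # (L1)
    UA, UB = op_upper(RN, A, alpha), op_upper(RN, B, alpha)
    LA, LB = op_lower(RN, A, beta), op_lower(RN, B, beta)
    assert all(x <= y for x, y in zip(UA, UB))                  # (U3)
    assert all(x <= y for x, y in zip(LA, LB))                  # (L3)
    assert [min(alpha, u) for u in UA] == UA                    # (U8)
    assert [max(beta, l) for l in LA] == LA                     # (L8)
```

Such randomized checks are also a cheap safeguard when implementing the operators for the MADM experiments later in the paper.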
The subsequent proposition presents the fundamental characteristics of the PAPFRS.
Proposition 2.
Let $(U, RN)$ be a GFRNS approximation space and $U_\alpha = \overline{PRN}_\alpha$, $L_\beta = \underline{PRN}_\beta$. Then, $U_\alpha$ and $L_\beta$ meet (U4b), (L4b), and (1)–(4) of Proposition 1.
Proof. 
(U4b) Taking $A_i\ (i \in I) \in [0,1]^U$ and $o \in U$,
$$U_\alpha\Big( \bigvee_{i \in I} A_i \Big)(o) = \bigvee_{K \in RN(o)} \bigvee_{m \in U} \Big( \Big( \alpha \wedge \bigvee_{i \in I} A_i(m) \Big) \ominus K(m) \Big) = \bigvee_{K} \bigvee_{m} \Big( \Big( \bigvee_{i \in I} (\alpha \wedge A_i(m)) \Big) \ominus K(m) \Big) \overset{\text{Lemma 1(3)}}{=} \bigvee_{i \in I} \bigvee_{K} \bigvee_{m} \big( (\alpha \wedge A_i(m)) \ominus K(m) \big) = \bigvee_{i \in I} U_\alpha(A_i)(o).$$
(L4b) Taking $A_i\ (i \in I) \in [0,1]^U$ and $o \in U$,
$$L_\beta\Big( \bigwedge_{i \in I} A_i \Big)(o) = \bigwedge_{K \in RN(o)} \bigwedge_{m \in U} \Big( \Big( \beta \vee \bigwedge_{i \in I} A_i(m) \Big) \oplus K(m) \Big) = \bigwedge_{K} \bigwedge_{m} \Big( \Big( \bigwedge_{i \in I} (\beta \vee A_i(m)) \Big) \oplus K(m) \Big) \overset{\text{(SG5)}}{=} \bigwedge_{i \in I} \bigwedge_{K} \bigwedge_{m} \big( (\beta \vee A_i(m)) \oplus K(m) \big) = \bigwedge_{i \in I} L_\beta(A_i)(o). \ \square$$
Drawing on Propositions 1 and 2, we present the following corollary pertaining to the properties of the CAPFRS.
Corollary 1.
Let $(U, RN)$ be a GFRNS approximation space and $U_\alpha = \overline{CRN}_\alpha$, $L_\beta = \underline{CRN}_\beta$. Then, $U_\alpha$ and $L_\beta$ fulfill (1)–(4) in Proposition 1.
In classical RST and its fuzzy counterparts, antiserial and antireflexive conditions are often considered for the derivation of additional properties. Similar conditions can also be defined for the OPCAPFRS, and we explore them next.
Definition 5.
Let $(U, RN)$ be a GFRNS approximation space.
(1) If for all $o \in U$ and all $K \in RN(o)$, $\bigwedge_{m \in U} K(m) = 0$, then $RN$ is called $ORN$-antiserial.
(2) If for all $o \in U$, $\bigwedge_{K \in RN(o)} \bigwedge_{m \in U} K(m) = 0$, then $RN$ is called $PRN$-antiserial.
Remark 3.
(1) According to Definition 5, the $ORN$-antiserial condition implies the $PRN$-antiserial condition, which means the $ORN$-antiserial condition is the stronger of the two.
(2) Let $RN_d(o) = \{d(o, \cdot)\}$, with $d$ denoting a fuzzy metric. The GFRNS $RN_d$ satisfies the $ORN$-antiserial condition if and only if it meets the $PRN$-antiserial condition, which in turn holds if and only if $\bigwedge_{m \in U} d(o, m) = 0$ for every $o \in U$; in other words, $d$ itself is antiserial. Hence, the $ORN$-antiserial and $PRN$-antiserial conditions applicable to the GFRNS serve as natural extensions of the corresponding condition for the fuzzy metric $d$.
The proposition below presents additional characteristics of the OAPFRS in the case where $RN$ is $ORN$-antiserial.
Proposition 3.
Let U , RN be an O R N -antiserial GFRNS approximation space and U α = O R N ¯ α , L β = O R N ̲ β .
(1) if satisfies (SG6) and (SG7), then U α fulfills (U2).
(2) if satisfies (SG6) and (SG7), then L β fulfills (L2).
Proof. 
(1) Taking A [ 0 , 1 ] U and o U , α [ 0 , 1 ] ,
U α ( 1 ) ( o ) = K RN ( o ) m U α 1 ( m ) K ( m ) = K RN ( o ) m U α K ( m ) = Lemma 1 ( 3 ) K RN ( o ) α m U K ( m ) = ORN-antiserial K RN ( o ) α 0 = α 0 = Lemma 2 ( 1 ) , SG 6 , SG 7 α .
(2) Taking A [ 0 , 1 ] U and o U , β [ 0 , 1 ] ,
L β ( 0 ) ( o ) = K RN ( o ) m U β 0 ( m ) K ( m ) = K RN ( o ) m U β K ( m ) = SG 5 K RN ( o ) β m U K ( m ) = ORN-antiserial K RN ( o ) β 0 = β 0 = SG 6 , SG 7 β .
The proposition below presents additional characteristics of the PAPFRS in the case where RN is P R N -antiserial.
Proposition 4.
Let U , RN be a P R N -antiserial GFRNS approximation space and U α = P R N ¯ α , L β = P R N ̲ β . If satisfies (SG6) and (SG7), then U α and L β fulfill (1) and (2) in Proposition 3.
Proof. 
(U2) Taking A [ 0 , 1 ] U and o U , α [ 0 , 1 ] ,
U α ( 1 ) ( o ) = K RN ( o ) m U α 1 ( m ) K ( m ) = K RN ( o ) m U α K ( m ) = Lemma 1 ( 3 ) α K RN ( o ) m U K ( m ) = PRN-antiserial α 0 = Lemma 2 ( 1 ) , SG 6 , SG 7 α .
(L2) Taking A [ 0 , 1 ] U and o U , β [ 0 , 1 ] ,
L β ( 0 ) ( o ) = K RN ( o ) m U β 0 ( m ) K ( m ) = K RN ( o ) m U β K ( m ) = SG 5 β K RN ( o ) m U K ( m ) = PRN-antiserial β 0 = SG 6 , SG 7 β .
Corollary 2.
Let U , RN be an O R N -antiserial GFRNS approximation space and U α = C R N ¯ α , L β = C R N ̲ β . Then, U α , L β meet (1), (2) in Proposition 3.
Within the approximation space ( U , d ) equipped with an antireflexive metric d, fuzzy rough sets inherently satisfy the inclusion property: A [ 0 , 1 ] U , d ̲ ( A ) A d ¯ ( A )  [4]. This fundamental characteristic is crucial for rough set applications, as it enables the definition of an accuracy measure, a concept widely utilized across diverse domains [22,30]. To ensure this inclusion property holds under variable precision settings for OPCAPFRS models, we impose the following antireflexive condition.
Definition 6.
Let U , RN be a GFRNS approximation space.
(1) If o U , K RN ( o ) , K ( o ) = 0 , then RN is called O R N -antireflexive.
(2) If o U , K RN ( o ) K ( o ) = 0 , then RN is called P R N -antireflexive.
Remark 4.
(1) According to Definition 6, P R N -antireflexive implies O R N -antireflexive; that is, the P R N -antireflexive condition is stronger than the O R N -antireflexive condition.
(2) Let RN d ( o ) = { d ( o , ) } , with d denoting a fuzzy metric. The GFRNS RN d satisfies the O R N -antireflexive condition if and only if it meets the P R N -antireflexive condition, which in turn holds if and only if, for every o U , d ( o , o ) = 0 . In other words, d itself is antireflexive. Hence, the O R N -antireflexive and P R N -antireflexive conditions applicable to the GFRNS serve as natural extensions of the corresponding conditions for the fuzzy metric d.
(3) In data processing, adopting the variable precision idea to calculate the lower and upper approximations of rough sets indeed brings convenience, but it also entails certain risks. Lack of comparability is one such risk. Overemphasizing either variable precision or perfect mathematical properties is not an optimal choice; however, certain basic principles and properties should be retained. The requirement of anti-reflexivity in this paper aims to achieve methodological balance and compatibility. As the proposed method is a rough set-based approach, comparability is one of its fundamental properties, hence the inclusion of the anti-reflexivity condition.
We now demonstrate that under variable precision settings, both upper and lower approximations fulfill generalized inclusion properties. Specifically, the following proposition establishes that when RN exhibits ORN-antireflexivity, the OAPFRS model guarantees properties (U6) and (L6) for upper and lower approximations, respectively.
Proposition 5.
Let U , RN be an O R N -antireflexive GFRNS approximation space and U α = O R N ¯ α , L β = O R N ̲ β .
(1) if satisfies (SG6) and (SG7), then U α fulfills (U6).
(2) if satisfies (SG6) and (SG7), then L β fulfills (L6).
Proof. 
(1) Taking A [ 0 , 1 ] U and o U , α [ 0 , 1 ] ,
U α ( A ) ( o ) = K RN ( o ) m U α A ( m ) K ( m ) K RN ( o ) α A ( o ) K ( o ) = ORN-antireflexive α A ( o ) 0 = Lemma 2 ( 1 ) , SG 6 , SG 7 α A ( o ) .
(2) Taking A [ 0 , 1 ] U and o U , β [ 0 , 1 ] ,
L β ( A ) ( o ) = K RN ( o ) m U β A ( m ) K ( m ) K RN ( o ) β A ( o ) K ( o ) = ORN-antireflexive β A ( o ) 0 = SG 6 , SG 7 β A ( o ) .
The proposition beneath illustrates that when RN is P R N -antireflexive, the PAPFRS meets the generalized inclusion properties.
Proposition 6.
Let U , RN be a P R N -antireflexive GFRNS approximation space and U α = P R N ¯ α , L β = P R N ̲ β .
(1) if satisfies (SG6) and (SG7), then U α fulfills (U6).
(2) if satisfies (SG6) and (SG7), then L β fulfills (L6).
Proof. 
(1) Taking A [ 0 , 1 ] U and o U , α [ 0 , 1 ] ,
U α ( A ) ( o ) = K RN ( o ) m U α A ( m ) K ( m ) K RN ( o ) α A ( o ) K ( o ) = Lemma 1 ( 3 ) α A ( o ) K RN ( o ) K ( o ) = PRN-antireflexive α A ( o ) 0 = Lemma 2 ( 1 ) , SG 6 , SG 7 α A ( o ) .
(2) Taking A [ 0 , 1 ] U and o U , β [ 0 , 1 ] ,
L β ( A ) ( o ) = K RN ( o ) m U β A ( m ) K ( m ) K RN ( o ) β A ( o ) K ( o ) = SG 5 β A ( o ) K RN ( o ) K ( o ) = PRN-antireflexive β A ( o ) 0 = SG 6 , SG 7 β A ( o ) .
The next corollary is derived by making use of Propositions 5 and 6.
Corollary 3.
Let U , RN be an O R N -antireflexive GFRNS approximation space and U α = C R N ¯ α , L β = C R N ̲ β . Then, U α , L β fulfill (1) and (2) in Proposition 6.
Propositions 5 and 6, and Corollary 3 collectively confirm that L ̲ β ( A ) A U ¯ α ( A ) holds if α = 1 and β = 0 , thereby satisfying the inclusion property. However, the subsequent example demonstrates that this property fails to hold for arbitrary values of α and β .
Example 2.
Let U = { o 1 , o 2 , o 3 } and RN be the antireflexive fuzzy remote neighborhood presented in Table 4. Take = M , = M , α = 0.3 , β = 0.7 , η = 0.5 , and
A = 0.3 o 1 + 0.3 o 2 + 0.4 o 3 .
Then, we have
U α ( A ) = O R N ¯ α ( A ) = P R N ¯ α ( A ) = C R N ¯ α ( A ) = 0.3 o 1 + 0.3 o 2 + 0.3 o 3 ,
L β ( A ) = O R N ̲ β ( A ) = P R N ̲ β ( A ) = C R N ̲ β ( A ) = 0.7 o 1 + 0.7 o 2 + 0.7 o 3 .
Hence, L β ( A ) A U α ( A ) does not hold here.
Example 2 reveals the inadequacy of defining an accuracy measure through L β ( A ) U α ( A ) , as this ratio may exceed unity. Nevertheless, the GIP enables two distinct accuracy measures: one derived from the lower approximation and another from the upper approximation. We exemplify these measures using the OAPFRS model below.
Definition 7.
Let A [ 0 , 1 ] U and α [ 0 , 1 ] . The upper accuracy measure U P P ( A ) is expressed as
U P P ( A ) = | α A | | O R N ¯ α ( A ) | .
Definition 8.
Let A [ 0 , 1 ] U and β [ 0 , 1 ] . The lower accuracy measure L P P ( A ) is expressed as
L P P ( A ) = | O R N ̲ β ( A ) | | β A | .
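For concreteness, both measures can be sketched numerically by taking the sigma-count (the sum of membership degrees) as the fuzzy cardinality | · | and adopting the convention 0/0 = 1 specified below. The fuzzy sets α A and β A, together with the approximations, are passed in as precomputed membership vectors, since their exact connectives depend on the chosen semi-grouping operators:

```python
def sigma_count(A):
    # Fuzzy cardinality |A|: the sigma-count, i.e., the sum of membership degrees.
    return sum(A)

def accuracy_ratio(numerator, denominator):
    # |numerator| / |denominator| with the convention 0/0 = 1.
    num, den = sigma_count(numerator), sigma_count(denominator)
    if den == 0:
        return 1.0 if num == 0 else float("inf")
    return num / den

# UPP(A) = |alpha A| / |upper approximation of A|   (Definition 7)
# LPP(A) = |lower approximation of A| / |beta A|    (Definition 8)
```

Because of the generalized inclusion property, both ratios stay within [0, 1] under the antireflexivity conditions above.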
Next, we investigate certain properties of the two accuracy measures. We specify that 0 / 0 = 1 . The following two propositions are valid based on Definitions 7 and 8.
Proposition 7.
Let A [ 0 , 1 ] U and α [ 0 , 1 ] . The following statements hold:
(1) U P P ( A ) = 0 | α A | = 0 α A = .
(2) 0 < U P P ( A ) < 1 | α A | < | O R N ¯ α ( A ) | α A O R N ¯ α ( A ) .
(3) U P P ( A ) = 1 | α A | = | O R N ¯ α ( A ) | α A = O R N ¯ α ( A ) .
Proposition 8.
Let A [ 0 , 1 ] U and β [ 0 , 1 ] . The following statements hold:
(1) L P P ( A ) = 0 | O R N ̲ β ( A ) | = 0 O R N ̲ β ( A ) = .
(2) 0 < L P P ( A ) < 1 | O R N ̲ β ( A ) | < | β A | O R N ̲ β ( A ) β A .
(3) L P P ( A ) = 1 | β A | = | O R N ̲ β ( A ) | β A = O R N ̲ β ( A ) .
The proposition below demonstrates how the change of parameters influences rough sets and accuracy measures.
Proposition 9.
Let U , RN be a GFRNS approximation space and α , β , η [ 0 , 1 ] .
(1) Let U α = O R N ¯ α or P R N ¯ α or C R N ¯ α ; then, U α is monotonically increasing with respect to α .
(2) Let L β = O R N ̲ β or P R N ̲ β or C R N ̲ β ; then, L β is monotonically increasing with respect to β .
(3) C R N ¯ α is monotonically decreasing with respect to η ; C R N ̲ β is monotonically increasing with respect to η .
(4) U P P and L P P are monotonically increasing with respect to η .
Proof. 
(1) Since ⊖ is monotonically increasing by Lemma 1(4), the conclusion follows directly from Definition 4.
(2) Since ⊕ is monotonically increasing by Definition 3, the conclusion follows directly from Definition 4.
(3) Taking A [ 0 , 1 ] U and o U , α , β , η [ 0 , 1 ] ,
C R N ¯ α ( A ) ( o ) = η O R N ¯ α ( A ) ( o ) + ( 1 η ) P R N ¯ α ( A ) ( o ) = η ( O R N ¯ α ( A ) P R N ¯ α ( A ) ( o ) ) + P R N ¯ α ( A ) ( o ) ,
C R N ̲ β ( A ) ( o ) = η O R N ̲ β ( A ) ( o ) + ( 1 η ) P R N ̲ β ( A ) ( o ) = η ( O R N ̲ β ( A ) P R N ̲ β ( A ) ( o ) ) + P R N ̲ β ( A ) ( o ) .
Furthermore, by Section 7.3, we have O R N ¯ α ( A ) ⊆ P R N ¯ α ( A ) and P R N ̲ β ( A ) ⊆ O R N ̲ β ( A ) . Hence, for every o U , we have O R N ¯ α ( A ) ( o ) − P R N ¯ α ( A ) ( o ) ≤ 0 and O R N ̲ β ( A ) ( o ) − P R N ̲ β ( A ) ( o ) ≥ 0 . Therefore, we can regard C R N ̲ β ( A ) ( o ) as an increasing function of η , and C R N ¯ α ( A ) ( o ) as a decreasing function of η . To sum up, as η increases, C R N ¯ α ( A ) shows a downward trend, while C R N ̲ β ( A ) shows an upward trend.
(4) Through (3), along with Definitions 7 and 8, (4) can be derived.    □
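The convex-combination argument in (3) is easy to check numerically. A minimal sketch, assuming the optimistic and pessimistic approximations are given as membership vectors with the optimistic upper approximation lying pointwise below the pessimistic one, as in the proof:

```python
def compromise(opt, pes, eta):
    # Compromise approximation: eta * optimistic + (1 - eta) * pessimistic,
    # computed pointwise over the universe.
    return [eta * o + (1 - eta) * p for o, p in zip(opt, pes)]

# Upper pair: optimistic upper <= pessimistic upper (pointwise), so the
# compromise upper approximation decreases as eta grows.
O_upper, P_upper = [0.3, 0.5], [0.6, 0.8]
low_eta  = compromise(O_upper, P_upper, 0.2)
high_eta = compromise(O_upper, P_upper, 0.7)
assert all(x >= y for x, y in zip(low_eta, high_eta))
```

The lower pair behaves symmetrically: since the pessimistic lower approximation lies below the optimistic one, the compromise lower approximation increases with η.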

4. Application of OPCAPFRS Model and VIKOR Method to MADM Problems

In this section, the OPCAPFRS model is employed to investigate an MADM problem. Initially, we present a novel approach for determining the attribute weights, by utilizing the accuracy measures outlined in Definitions 7 and 8. Subsequently, an innovative methodology for addressing MADM issues is proposed, by means of the integration of the OPCAPFRS model and the VIKOR method.

4.1. Description of MADM Problem

This subsection presents an MADM problem characterized by real-life data.
Definition 9.
An MADM issue is defined as a 4-tuple ( U , A , D , W ), where
  • U = { o 1 , o 2 , , o p } denotes the alternatives;
  • A = { A 1 ˜ , A 2 ˜ , , A q ˜ } represents the attributes;
  • D = { A j ˜ ( o i ) = v i j | o i U , A j ˜ A ; i = 1 , 2 , , p ; j = 1 , 2 , , q } refers to the domain of attributes, where v i j represents the value of the alternative o i for the attribute A j ˜ ;
  • W = ( w 1 , w 2 , , w q ) is the weight vector, where w j [ 0 , 1 ] and j = 1 q w j = 1 .
The MADM decision matrix (see Table 5) can be obtained from the definition provided above.

4.2. Combining OPCAPFRS and VIKOR Method for MADM

In this subsection, we develop a new methodology for addressing MADM problems, which combines the OPCAPFRS and the VIKOR method. We take the OAPFRS as an example to explain our method; the PE and CO models are treated analogously.
(1) The fuzzy decision-making matrix (see Table 6) can be derived from Table 5 through a standardizing process:
If A j ˜ ( j = 1 , 2 , , q ) is a benefit attribute, then
V i j = v i j ( 1 i p v i j ) ( 1 i p v i j ) ( 1 i p v i j ) .
If A j ˜ ( j = 1 , 2 , , q ) is a cost attribute, then
V i j = ( 1 i p v i j ) v i j ( 1 i p v i j ) ( 1 i p v i j ) .
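Formulae (1) and (2) are the usual min–max standardization. A small sketch follows; the fallback of 0.0 for a constant column is our own assumption, not specified in the text:

```python
def standardize(column, benefit=True):
    """Min-max standardization of one attribute column.

    Benefit attribute (Formula (1)): (v - min) / (max - min).
    Cost attribute    (Formula (2)): (max - v) / (max - min).
    """
    lo, hi = min(column), max(column)
    if hi == lo:                 # degenerate column; this fallback is an assumption
        return [0.0 for _ in column]
    if benefit:
        return [(v - lo) / (hi - lo) for v in column]
    return [(hi - v) / (hi - lo) for v in column]
```

Applying this column by column to Table 5 yields the fuzzy decision matrix of Table 6, with all values in [0, 1].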
(2) Drawing on the data in Table 6 and stakeholder preference, we propose the construction of an antireflexive generalized fuzzy remote neighborhood system operator (GFRNSO, for short) RN A ( o i ) .
(3) Then, the OP fuzzy upper and lower approximation ( O R N ¯ α and O R N ̲ β ) of each attribute fuzzy set A j concerning A j ˜ can be computed via the given GFRNSO. The attribute fuzzy sets are expressed as:
A j = i = 1 p V i j o i , j { 1 , 2 , , q } .
(4) Depending on the fuzzy upper and lower accuracy measures, E u = { e j u | j = 1 , 2 , , q } and E l = { e j l | j = 1 , 2 , , q } can be calculated separately,
e j u = UPP ( A j ) j = 1 q UPP ( A j ) ,
e j l = LPP ( A j ) j = 1 q LPP ( A j ) .
In turn, the attribute weight vector W = ( w 1 , w 2 , , w q ) can be obtained by
w j = a e j u + ( 1 a ) e j l ,
where a is a parameter in [0,1], and j = 1 , 2 , , q .
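Step (4) can be sketched as follows, assuming the upper and lower accuracy measures UPP(A_j) and LPP(A_j) have already been computed for each attribute:

```python
def attribute_weights(upp_vals, lpp_vals, a=0.3):
    # Normalize each accuracy vector (Formulae (4) and (5)) and blend the two
    # linearly with parameter a in [0, 1] (Formula (6)).
    su, sl = sum(upp_vals), sum(lpp_vals)
    eu = [u / su for u in upp_vals]   # e_j^u
    el = [l / sl for l in lpp_vals]   # e_j^l
    return [a * u + (1 - a) * l for u, l in zip(eu, el)]
```

Because each normalized vector sums to one, the resulting weights also sum to one for any a in [0, 1].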
(5) Based on our newly proposed OPCAPFRS model, we apply the OPCAPFRS–VIKOR method.
(i) The fuzzy positive (negative) ideal solution is directly obtained from Table 6
FPIS = ( P 1 , P 2 , , P q ) ,
FNIS = ( N 1 , N 2 , , N q ) ,
where P j = 1 i p V i j , N j = 1 i p V i j , j = 1 , 2 , , q .
(ii) With reference to the attribute weight vector W = ( w 1 , w 2 , , w q ) , we calculate the Group Utility Value S i ( i = 1 , 2 , , p ) and Individual Regret Value R i ( i = 1 , 2 , , p ) , respectively:
S i = j = 1 q w j P j V i j P j N j ;
R i = 1 j q w j P j V i j P j N j .
The Compromise Value Q i ( i = 1 , 2 , , p ) is shown below:
Q i = k S i 1 i p S i 1 i p S i 1 i p S i + ( 1 k ) R i 1 i p R i 1 i p R i 1 i p R i , k [ 0 , 1 ] .
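Steps (5)(i)–(ii) can be sketched as follows, assuming the standardized matrix is non-degenerate (no attribute with P_j = N_j, and the S and R values are not all identical), so the denominators in Formulae (9)–(11) are nonzero:

```python
def vikor_indices(V, w, k=0.5):
    # V: standardized decision matrix (rows = alternatives, columns = attributes);
    # w: attribute weight vector; k: group-utility weight in Formula (11).
    p, q = len(V), len(V[0])
    P = [max(V[i][j] for i in range(p)) for j in range(q)]   # FPIS, Formula (7)
    N = [min(V[i][j] for i in range(p)) for j in range(q)]   # FNIS, Formula (8)
    S, R = [], []
    for i in range(p):
        terms = [w[j] * (P[j] - V[i][j]) / (P[j] - N[j]) for j in range(q)]
        S.append(sum(terms))        # group utility value, Formula (9)
        R.append(max(terms))        # individual regret value, Formula (10)
    S_lo, S_hi, R_lo, R_hi = min(S), max(S), min(R), max(R)
    Q = [k * (S[i] - S_lo) / (S_hi - S_lo)
         + (1 - k) * (R[i] - R_lo) / (R_hi - R_lo)
         for i in range(p)]         # compromise value, Formula (11)
    return S, R, Q
```

Smaller S, R, and Q values indicate better alternatives, matching the ascending ordering used below.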
(iii) Then, we construct three fuzzy sets, the maximum group utility fuzzy set T S , the minimum individual regret fuzzy set T R and the CO fuzzy set T Q , where
T S = S 1 o 1 + S 2 o 2 + + S p o p ,
T R = R 1 o 1 + R 2 o 2 + + R p o p ,
T Q = Q 1 o 1 + Q 2 o 2 + + Q p o p .
(iv) By Definition 4, we calculate the OP upper approximation O R N ¯ α ( S ) and the OP lower approximation O R N ̲ β ( S ) to the maximum group utility fuzzy set T S , the upper approximation O R N ¯ α ( R ) and the lower approximation O R N ̲ β ( R ) to the minimum individual regret fuzzy set T R , and the upper approximation O R N ¯ α ( Q ) and the lower approximation O R N ̲ β ( Q ) to the CO fuzzy set T Q .
(v) The sorting functions are given by:
C S = O R N ¯ α ( S ) O R N ̲ β ( S ) ,
C R = O R N ¯ α ( R ) O R N ̲ β ( R ) ,
C Q = O R N ¯ α ( Q ) O R N ̲ β ( Q ) .
C Q gives the ranking values.
With respect to the sorting functions provided above, we assume that the ranking result obtained by C Q in ascending order is o ( 1 ) , o ( 2 ) , , o ( p ) , and make the following judgments:
(a) C Q ( o ( 2 ) ) − C Q ( o ( 1 ) ) ≥ 1 / ( p − 1 ) ;
(b) o ( 1 ) ranks first in both the C S ranking and the C R ranking.
If both conditions (a) and (b) are satisfied, then o ( 1 ) is the CO solution. If not, the following checks are performed:
(i) If only condition (b) is not satisfied, then the CO solutions are o ( 1 ) and o ( 2 ) ;
(ii) If condition (a) is not satisfied, then the CO solution consists of all o ( i ) ’s that satisfy C Q ( o ( i ) ) − C Q ( o ( 1 ) ) < 1 / ( p − 1 ) .
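The acceptance checks can be sketched as follows, with p alternatives and smaller C_Q values ranked better (ascending order, as above); the quantity 1/(p − 1) plays the role of the acceptable-advantage bound:

```python
def compromise_solutions(CQ, CS, CR):
    # CQ, CS, CR: ranking scores per alternative; smaller is better, matching
    # the ascending C_Q ordering. p = number of alternatives.
    p = len(CQ)
    order = sorted(range(p), key=lambda i: CQ[i])
    best, second = order[0], order[1]
    threshold = 1 / (p - 1)                     # acceptable-advantage bound
    cond_a = CQ[second] - CQ[best] >= threshold
    cond_b = (best == min(range(p), key=lambda i: CS[i]) and
              best == min(range(p), key=lambda i: CR[i]))
    if cond_a and cond_b:
        return [best]
    if cond_a:                                  # only condition (b) fails
        return [best, second]
    # condition (a) fails: every alternative close enough to the best qualifies
    return [i for i in order if CQ[i] - CQ[best] < threshold]
```

The function returns the indices of the CO solution(s), mirroring cases (a), (b), (i), and (ii) above.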
Remark 5.
Based on the model characterization provided above, the reasonableness and benefits of our novel methodology are outlined below. To initiate, we put forward an unbiased method for calculating attribute weights by using approximate accuracies. Compared to outcomes derived from subjective weights in certain decision-making approaches, this method is more convincing. Furthermore, through integrating the VIKOR method with our OPCAPFRS model, the proposed decision-making method is capable of addressing some complex issues effectively. Also, it strikes a balance between objectivity and subjectivity, embracing preferences while preventing quality objects from being overlooked, thus facilitating flexible decision-making.

4.3. Algorithm Description

In light of the aforementioned method, we systematically outline our novel OPCAPFRS–VIKOR algorithm as follows. As in the previous section, we use  O R N ¯ α , O R N ̲ β  as an example in the algorithm description.
Input: The MADM matrix.
Output: The ranks of alternatives and the CO solution(s).
Step 1: Standardize the given matrix as per Formulae (1) and (2).
Step 2: Construct the GFRNSO on the basis of Table 6. Calculate the OP fuzzy upper and lower approximation ( O R N ¯ α and O R N ̲ β ) according to Definition 4.
Step 3: Calculate the attribute weight vector W = ( w 1 , w 2 , , w q ) by means of Definitions 7 and 8 and Formulae (4)–(6).
Step 4: Determine the fuzzy positive and negative ideal solution ( FPIS and FNIS ) in accordance with Formulae (7) and (8).
Step 5: Calculate the group utility value S , individual regret value R , and the CO value Q by Formulae (9), (10), and (11), respectively.
Step 6: Construct three fuzzy sets ( T S , T R , T Q ) corresponding to VIKOR indexes according to Formulae (12)–(14). Then, compute the corresponding upper and lower approximation O R N ¯ α ( S ) , O R N ̲ β ( S ) , O R N ¯ α ( R ) , O R N ̲ β ( R ) , O R N ¯ α ( Q ) , and O R N ̲ β ( Q ) , respectively, as per Definition 4.
Step 7: The sorting functions C S , C R , and C Q can be obtained directly by Formulae (15)–(17). Then, the rank and CO solution(s) are determined based on the sorting functions and the ranking values ( C Q ), respectively.
Algorithm 1 presents pseudo-code for the aforementioned decision-making approach.
Remark 6.
According to Table 6, we have p alternatives and q attributes. Thus, the algorithm complexity can be calculated (see Table 7).
Algorithm 1: The OPCAPFRS–VIKOR algorithm (taking the OAPFRS as an example).
Input: The MADM matrix, five parameters α , β , η , k , a , and semi-grouping function , .
Output: The ranks of alternatives and the CO solution(s).
1begin
Symmetry 18 00084 i001
34end

4.4. Real Example-Based Validation

To illustrate the high efficiency and real-world feasibility of our method, a typical dataset from the UCI repository was employed: https://archive.ics.uci.edu/dataset/29/computer+hardware (accessed on 29 November 2025).
Suppose a customer aims to acquire a high-performance CPU, and the choices are in the dataset. The original dataset contains 10 attributes, from which we selected six for analysis. Assume that A = { A 1 ˜ , A 2 ˜ , A 3 ˜ , A 4 ˜ , A 5 ˜ , A 6 ˜ } is the set of attributes, where A 1 ˜ represents the Maximum Main Memory (MMAX), A 2 ˜ the Minimum Main Memory (MMIN), A 3 ˜ the Machine Cycle Time (MYCT), A 4 ˜ the Cache Memory (CACH), A 5 ˜ the Minimum Channels (CHMIN), and A 6 ˜ the Maximum Channels (CHMAX). Specifically, A 3 ˜ is a cost attribute, while the remaining ones are benefit attributes. To illustrate our new approach, we randomly selected eight alternatives, U = { o 1 , o 2 , o 3 , o 4 , o 5 , o 6 , o 7 , o 8 }. The data are shown in Table 8.
(1) Conduct a standardization process on the data in Table 8, resulting in Table 9.
(2) Based on Table 9, for each o i , we construct a GFRNSO RN A ( o i ) = { RN j ( o i ) | j = 1 , 2 , , 6 ; i = 1 , 2 , , 8 } , where
RN j ( o i ) = k = 1 8 ( V i j V k j ) ( V k j V i j ) o k , k { 1 , 2 , , 8 } .
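Reading the two truncation terms above as bounded differences joined by a pointwise maximum, the membership of o_k in RN_j(o_i) reduces to |V_ij − V_kj|. This reading is our assumption (the connectives are not fully legible in the formula), but it does yield an antireflexive operator, as step (2) requires:

```python
def gfrnso(V):
    """Fuzzy remote-neighborhood operator from a standardized matrix V.

    RN[i][j][k] = |V[i][j] - V[k][j]| (assumed reading of the formula);
    antireflexive, since RN[i][j][i] = 0 for every i and j.
    """
    p, q = len(V), len(V[0])
    return [[[abs(V[i][j] - V[k][j]) for k in range(p)]
             for j in range(q)]
            for i in range(p)]
```

Under this reading, alternatives with similar attribute values are mutually "close" (low remoteness), and each alternative is at remoteness zero from itself.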
Then, taking = B , = B , α = 0.8, β = 0.2, we calculate the upper and lower approximations of A j (see Table 10 and Table 11), where
A 1 = 0.167 o 1 + 0.167 o 2 + 0.333 o 3 + 0.333 o 4 + 0.0 o 5 + 0.333 o 6 + 0.333 o 7 + 1.0 o 8 ;
A 2 = 0.429 o 1 + 0.143 o 2 + 1.0 o 3 + 0.0 o 4 + 0.143 o 5 + 0.0 o 6 + 0.429 o 7 + 0.429 o 8 ;
A 3 = 1.0 o 1 + 0.807 o 2 + 0.965 o 3 + 0.0 o 4 + 0.895 o 5 + 0.789 o 6 + 1.0 o 7 + 0.965 o 8 ;
A 4 = 0.0 o 1 + 0.0 o 2 + 1.0 o 3 + 0.0 o 4 + 0.429 o 5 + 0.071 o 6 + 0.143 o 7 + 0.286 o 8 ;
A 5 = 0.28 o 1 + 0.28 o 2 + 0.6 o 3 + 0.0 o 4 + 0.6 o 5 + 1.0 o 6 + 0.44 o 7 + 0.44 o 8 ;
A 6 = 0.0 o 1 + 0.05 o 2 + 0.05 o 3 + 0.238 o 4 + 0.1 o 5 + 0.225 o 6 + 0.0 o 7 + 1.0 o 8 .
(3) Take a = 0.3. Based on Formulae (4)–(6), the calculated weights are presented in Table 12.
(4) Based on Table 9, the FPIS and FNIS are directly obtained,
FPIS = ( 1 , 1 , 1 , 1 , 1 , 1 ) , FNIS = ( 0 , 0 , 0 , 0 , 0 , 0 ) .
(5) Take k = 0.5. By means of Formulae (9)–(11), the values of S, R, and Q are shown in Table 13.
(6) Then, we construct the fuzzy sets corresponding to S , R , and Q , which are
T S = 0.6427 o 1 + 0.7218 o 2 + 0.3304 o 3 + 0.9075 o 4 + 0.6100 o 5 + 0.5679 o 6 + 0.5689 o 7 + 0.2894 o 8 ; T R = 0.1553 o 1 + 0.1475 o 2 + 0.1475 o 3 + 0.2172 o 4 + 0.1668 o 5 + 0.1571 o 6 + 0.1553 o 7 + 0.1025 o 8 ; T Q = 0.5158 o 1 + 0.5460 o 2 + 0.2293 o 3 + 1.0 o 4 + 0.5397 o 5 + 0.4631 o 6 + 0.4561 o 7 + 0.0 o 8 .
In turn, the upper and lower approximations of T S , T R and T Q can be computed (see Table 14 and Table 15).
(7) Take = B . We calculate the sorting functions C S , C R , and C Q by Formulae (15)–(17), (see Table 16).
(8) The corresponding ranks derived from C S , C R , and C Q are shown in Table 17.
(9) Ultimately, the rank of these alternatives is o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4 , and the CO solutions are o 8 and o 3 .

5. Comparative Analysis

In this part, to showcase the performance of the OPCAPFRS–VIKOR method, we conduct comparative assessments of our method against several existing decision-making approaches, namely the OWA approach [47], the OWGA approach [48], the TOPSIS approach [49], J’s approach [50], Z’s approach [6], ZS’s approach [51], and YS’s approach [52]. Below, when the OAPFRS, PAPFRS, and CAPFRS are used in the OPCAPFRS–VIKOR method, we denote them as OGRNVRS, PGRNVRS, and CGRNVRS, respectively. The rationale for selecting these methods for comparison is outlined as follows:
(1) As traditional decision-making techniques, the OWA approach, the OWGA approach, and the TOPSIS approach share a fundamental applicability with our suggested approach: all of them can be employed to tackle problems with real-valued information.
(2) Both J’s approach and Z’s approach are innovative TOPSIS approaches incorporating rough set models, while our OPCAPFRS–VIKOR method is an improved approach that integrates the FRS model.
(3) Both ZS’s method and YS’s method are recent approaches that integrate RST to address MADM problems, with the former additionally incorporating overlap function and grouping function. Therefore, the results of these two experiments offer valuable insights for our research.
As is commonly accepted, the primary aims of most decision-making methods lie in identifying the optimal alternative and developing a valid ranking mechanism. Thus, we strengthened our comparative study by prioritizing these two aspects.
The ranking outcomes produced by various decision-making techniques are tabulated in Table 18 and graphically depicted in Figure 3; furthermore, the ranking scores associated with these approaches are presented in Figure 4.
Drawing on these figures and table, we provide a detailed account of the ranking outcomes for the aforementioned eight decision-making approaches.
(1) Comparative evaluation of the ranking scheme.
As indicated in Table 18, the ranking generated by our OGRNVRS method is identical to that produced by the TOPSIS method, while the ranking from our PGRNVRS method matches that of YS’s method and the ranking from our CGRNVRS method matches that of the OWA method. Furthermore, the rankings derived from other methods exhibit a high degree of similarity to those from our OGRNVRS, PGRNVRS, and CGRNVRS methods. The OGRNVRS method yields two CO solutions o 8 , o 3 , while both PGRNVRS and CGRNVRS methods yield only one CO solution, which is o 8 . This evidence confirms the validity and feasibility of the OPCAPFRS–VIKOR method.
(2) Comparative evaluation of the optimal alternative.
Other than ZS’s method and Z’s method, the best and second-best alternatives identified by other methods are consistent with those from our OGRNVRS, PGRNVRS, and CGRNVRS methods, which are o 8 and o 3 , respectively. Notably, these two methods also select o 8 and o 3 as their top two alternatives, though their order differs. This consistency with most of the aforementioned methods demonstrates the effectiveness and rationality of our methods.
Furthermore, it is readily apparent from Figure 4 that the trend in the ranking values generated by the OWA approach, the TOPSIS approach, Z’s approach, ZS’s approach, and YS’s approach is consistent with that of our methods. This consistent trend validates the feasibility and reliability of our methods.
Table 19 presents the time and space complexities of the eight different methods, where p denotes the number of alternatives and q denotes the number of attributes. This result indicates that our approach introduces a certain degree of additional algorithmic complexity in both time and space, as is often expected when enhancing model performance.
For a more rigorous comparison of the ranking outcomes generated by different methods, the Pearson Correlation Coefficient (PCC) was utilized to quantify the correlations among the methods. Specific results are displayed in Table 20 and Figure 5.
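For reference, the PCC between two vectors of ranking scores can be computed as follows (a standard formula, not specific to this paper):

```python
def pearson(x, y):
    # Pearson correlation coefficient between two score vectors.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)    # assumes neither vector is constant
```

Values near +1 indicate that two methods rank the alternatives almost identically; values near −1 indicate reversed rankings.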
By convention, an absolute PCC value greater than 0.6 signifies a high degree of correlation between two methods. Based on Table 20, it can be observed that the rankings derived from all the mentioned methods are strongly correlated with those of our methods. In addition, it is observed that while the PCC of our OGRNVRS method with J’s method is relatively low (0.792), the PCC values of our PGRNVRS and CGRNVRS methods with J’s method are notably high, 0.903 and 0.910, respectively. A similar situation is found when comparing with Z’s method. Moreover, as newly proposed methods in 2024 and 2025, respectively, both ZS’s method and YS’s method exhibit high PCC values (all exceeding 0.8) with our OPCAPFRS–VIKOR methods. This comparative analysis further confirms the effectiveness and rationality of our proposed method.
Remark 7.
In addressing decision-making problems, the OPCAPFRS–VIKOR method offers three main advantages compared with other methods:
(1) The weights determined by other methods are usually given by experts based on their experience, exhibiting strong subjectivity. The weights proposed in this paper (Equation (6)) are obtained by calculating fuzzy upper and lower accuracy measures and linearly combining them, which constitutes an objective weight calculation method. It derives weights through correlations between data, thereby avoiding the arbitrariness and subjectivity of expert-dependent traditional methods.
(2) We verified the correlation between the obtained ranking and those from other methods using the PCC. The results show that our obtained results exhibit a strong positive correlation with those from other methods, which confirms the effectiveness of the OPCAPFRS–VIKOR method.
(3) Our methods provide three strategies—OP, PE, and CO—each yielding CO solution(s), thereby offering decision-makers a broader range of choices. Additionally, the balance between group utility and individual regret can be adjusted, enabling our method to incorporate experts’ subjective preferences alongside objective judgments. These characteristics collectively illustrate the practical applicability and reliability of the OPCAPFRS–VIKOR method.

6. Sensitivity Analysis

This section investigates the influence of the parameters incorporated in our proposed method on the ultimate decision. Our method comprises seven parameters in total: α , β , η , k , and a, together with the semi-grouping function ⊕ and its residual minus ⊖. To assess the impact of each parameter, we held the others constant and observed how the ranks and CO solution(s) varied with the remaining one.
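The one-at-a-time protocol can be sketched as follows; `toy_rank` is a hypothetical stand-in for the full OPCAPFRS–VIKOR pipeline of Section 4.3, used only to show the sweep structure:

```python
def sweep(rank_fn, base_params, name, values):
    # Vary one parameter over `values` while holding the others at `base_params`.
    results = {}
    for v in values:
        params = dict(base_params)
        params[name] = v
        results[v] = rank_fn(**params)
    return results

base = {"alpha": 0.8, "beta": 0.2, "eta": 0.5, "k": 0.5, "a": 0.3}

def toy_rank(alpha, beta, eta, k, a):
    # Hypothetical placeholder, not the paper's method; the real pipeline
    # would return the full ranking and CO solution(s).
    return "o8 > o3" if alpha >= 0.7 else "o3 > o8"

table = sweep(toy_rank, base, "alpha", [i / 10 for i in range(1, 11)])
```

Repeating the sweep for β, η, k, a, and the operator pair reproduces the experimental design of Tables 21–26.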
(1) Parametric sensitivity assessment for α .
Parameters were held constant at β = 0.2, η = 0.5, k = 0.5, a = 0.3, = B , and = B , with α systematically incremented by 0.1. This incremental variation yielded the ranking outcomes and compromise optimum solutions documented in Table 21.
Based on Table 21, it is readily apparent that when α is adjusted, the CO solutions o 8 and o 3 remain the top-ranked and second-ranked alternatives, while some other alternatives change ranking positions. The main changes occur among o 1 , o 2 , and o 5 : o 1 ranks higher when α ∈ [0.1, 0.6], while o 5 does so when α ∈ [0.7, 1]. This indicates that the rankings exhibit a reasonable degree of sensitivity to the key parameter α : when α is adjusted within its predefined range, the relative rankings of non-core alternatives adapt in line with the logic of the variable precision rough set framework. Notably, this sensitivity does not compromise the reliability of decision outcomes; on the contrary, the proposed method remains stable in generating CO solutions. This dual characteristic of reasonable sensitivity to parameter adjustments and stable output of core optimal solutions validates the rigor of the proposed method and enhances its practical applicability: decision-makers can adjust α according to their risk preferences or scenario requirements while preserving the reliability and clarity of the CO solutions. For MADM problems involving uncertain or noisy data, this balance between sensitivity and stability is particularly valuable, as it meets the model’s adaptability needs while achieving decision consistency. Similar phenomena arise as the remaining parameters vary; we analyze them briefly next.
(2) Parametric sensitivity assessment for β .
Parameters were held constant at α = 0.8, η = 0.5, k = 0.5, a = 0.3, = B , and = B , with β systematically incremented by 0.1. As we adjusted the value of β , the corresponding ranking results and CO solution(s) were obtained and are reported in Table 22.
As revealed in Table 22, the CO solutions are consistently o 8 and o 3 , except when β = 0.9. Moreover, for β ∈ [0, 0.7], only the ranking positions of o 6 and o 7 shift, while for β ∈ [0.8, 0.9], the ranking adjustments occur only at o 1 and o 5 . This demonstrates that the ranking result and the CO solutions are sensitive to β to a certain extent.
(3) Parametric sensitivity assessment for η .
The parameters α = 0.8, β = 0.2, k = 0.5, a = 0.3, = B , and = B were kept constant, and the step size for η was specified as 0.1. Through modifying the value of η , the ranking results and CO solution(s) were derived and are summarized in Table 23.
Based on Table 23, we observe that two optimal alternatives ( o 3 and o 8 ) emerge as parameter η varies, and so does the CO solution. This enables decision-makers to choose a CPU that aligns with their requirements. When o 3 is the optimal alternative, the ranking remains unchanged; however, when o 8 is the optimal alternative, the positions of o 1 , o 2 , o 4 , o 5 , and o 7 shift. These facts indicate that both CO solutions and ranking results given by our method are sensitive to changes in η . Additionally, the top two alternatives remain consistent overall, demonstrating the stability of our approach.
(4) Parametric sensitivity assessment for k .
We fixed α = 0.8, β = 0.2, η = 0.5, a = 0.3, = B , and = B and set the step size of k to 0.1. When we altered the value of k , the ranking results and CO solution(s) shown in Table 24 were obtained.
Table 24 indicates that with variations in k , the only ranking differences involve o 6 and o 7 : o 6 holds a higher position when k [0.6, 1], in which case the decision-maker prioritizes group utility; o 7 ranks higher when k [0, 0.5], signifying a preference for individual advantages. Additionally, for k [0, 0.3], the CO solution is o 8 , and for k [0.4, 1], the CO solutions are o 8 and o 3 . Such variations validate the sensitivity of our method to k .
(5) Parametric sensitivity assessment for a.
Parameters were held constant at α = 0.8, β = 0.2, η = 0.5, k = 0.5, = B and = B , with a systematically incremented by 0.1.
Similarly, Table 25 shows that as parameter a changes, the ranking variations are limited to o 6 and o 7 : o 6 ranks higher when a [0, 0.2], where the decision-maker leans more toward weights calculated by lower accuracy measure; o 7 ranks higher when a [ 0.3 , 1 ] , indicating a greater preference for weights derived from the upper accuracy measure. Moreover, the CO solutions stay unchanged (namely o 8 and o 3 ), showing the stability of our proposed method.
Parameter a is designed to balance the upper approximation accuracy measure and the lower approximation accuracy measure within the OPCAPFRS framework, taking values in the range a [ 0 , 1 ] . Its “subjectivity” essentially reflects the decision-maker’s quantitative preference for the “absolute certainty” (lower approximation) and “maximum possibility” (upper approximation) of decision classes, rather than arbitrary and unfounded subjective judgment. Unlike subjective weighting methods (e.g., the Analytic Hierarchy Process) that rely on expert scores to directly determine weights, parameter a does not alter the objective calculation logic of weights—it only adjusts the relative importance of the two objectively derived accuracy measures. Thus, it maintains the core advantage of “objective weight calculation” while accommodating the preference needs in practical decision-making.
(6) Parametric sensitivity assessment for ⊕ and ⊖.
We fixed α = 0.8, β = 0.2, η = 0.5, k = 0.5, and a = 0.3. Varying the operators ⊕ and ⊖ produced the rankings and CO solution(s) shown in Table 26.
As shown in Table 26, when choosing (⊕, ⊖) = (⊕_2, ⊖_2), (⊕_M, ⊖_M), or (⊕_P, ⊖_P), the ranking result is identical to that generated by our CGRNVRS method, while the rankings corresponding to other types of ⊕ and ⊖ differ significantly. Also, we notice that three sets of CO solutions are obtained: (o 8), (o 8, o 3), and (o 8, o 1). This indicates that the ranking results and CO solutions are sensitive to ⊕ and ⊖. Furthermore, the best two alternatives stay unchanged with changes in ⊕ and ⊖, confirming the validity and stability of our method. Based on the analysis results, we find that (⊕_B, ⊖_B) exhibits better stability while maintaining sensitivity; thus, we will adopt (⊕_B, ⊖_B) in future research.
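For reference, two of the semi-grouping pairs of Table 2 can be written out directly. This is a minimal sketch covering only the maximum pair and the probabilistic-sum pair; in each pair ⊖ is the residual minus of the corresponding ⊕, i.e., the smallest t with ω ⊕ t ≥ ζ:

```python
def g_max(z, w):
    # ζ ⊕_M ω = max{ζ, ω}
    return max(z, w)

def r_max(z, w):
    # ζ ⊖_M ω = 0 if ζ <= ω, else ζ
    return 0.0 if z <= w else z

def g_prob(z, w):
    # ζ ⊕_P ω = ζ + ω - ζω  (probabilistic sum)
    return z + w - z * w

def r_prob(z, w):
    # ζ ⊖_P ω = 0 if ζ <= ω, else (ζ - ω) / (1 - ω)
    return 0.0 if z <= w else (z - w) / (1.0 - w)
```

For instance, r_prob(0.7, 0.3) = 0.4/0.7 ≈ 0.5714, and feeding this back gives g_prob(0.3, 0.5714…) = 0.7, illustrating the residuation.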
Remark 8.
In summary, parameter variations lead to some changes in the ranking results, indicating a certain degree of sensitivity of our method to parameter changes. Additionally, it is noteworthy that the optimal alternative remains largely unchanged, reflecting the relative stability of our approach. Furthermore, when η is varied, two best alternatives ( o 8 and o 3 ) are identified, allowing decision-makers to select an η value based on their personal preferences.

7. Three Experimental Analyses

In this part, three experiments were conducted to further validate the performance of our approach. Initially, by incorporating additional alternative objects, we examined the method’s performance in terms of both the ranking scheme and the CO solution(s). In addition, we demonstrated the applicability of our method to large datasets using an air quality dataset. All three experiments were based on the OGRNVRS method. The three experiments were all carried out on a personal computer running Jupyter Notebook 4.11.1, operating on a 64-bit Windows 11 system. The computer was powered by an Intel(R) Core(TM) i7-10510U CPU running at a speed of 1.80 GHz and supported by 32.0 GB of RAM.

7.1. Experimental Analysis of Ranking Scheme and CO Solution(s)

We first expanded Dataset U 1 = { o 1 , o 2 , … , o 8 } (introduced in Section 4.4) into a sequence of nested datasets U 1 ⊂ U 2 ⊂ U 3 ⊂ U 4 ⊂ U 5 ⊂ U 6 , with the number of alternatives being 8, 16, 24, 32, 40, and 48, respectively. Ranking similarity between the expanded datasets and U 1 served as an indicator of our method's robustness. The data for U 6 are displayed in Table 27.
We performed experiments on all six datasets, with the ranking results and the CO solution(s) summarized in Table 28 and visualized in Figure 6. Table 28 systematically presents the ranking results and CO solutions for the six datasets with gradually expanding object sizes (from 8 in U 1 to 48 in U 6 ). This gradient design of object sizes allowed us to verify the method’s performance in small, medium, and large-scale decision-making scenarios, serving as a key supplement to the robustness test. Alternative o 8 consistently maintained the top-ranked position across all datasets. This indicates that o 8 had a “scale-independent comprehensive advantage” in terms of attribute performance: even as the number of decision objects expanded from 8 to 48 (a 6-fold increase), its dominance over other alternatives remained undiminished, which directly reflects the strong stability of the method in identifying the core optimal alternative. Moreover, it is observable in Figure 6 that the six ranking schemes share highly similar trends. To quantify this similarity, the order similarity between U 1 and each of the other five datasets was calculated using the method proposed in [53]. The computed order similarities between U 1 and U 2 , U 3 , U 4 , U 5 , and U 6 were all 96.43%. Also, the CO solution(s) were o 8 and o 3 for U 1 and U 2 and o 8 for U 3 U 6 . These results confirm the excellent robustness and validity of our method.

7.2. Experimental Analysis on CO Solution(s)

Analysis of the ranking scheme revealed that o 8 consistently emerged in the CO solution(s). To further validate this finding, we examined whether o 8 remained in the CO solution(s) when random datasets were sampled from U 6 .
With reference to the data in Table 27, 20 random datasets were selected, each containing five alternatives, including o 8 . The ranking results and CO solution(s) for these datasets are shown in Table 29. Table 29 confirms that o 8 remained in the CO solution(s) across all 20 datasets, which verifies the stability of our method.
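The sampling protocol can be sketched as follows; the seed and helper names are our own assumptions, since the paper does not specify how the random subsets were drawn beyond their size and the forced inclusion of o 8:

```python
import random

def sample_subsets(universe, keep, size=5, trials=20, seed=0):
    # Draw `trials` random subsets of `universe`, each of the given size
    # and each forced to contain the alternative `keep`.
    rng = random.Random(seed)  # fixed seed is an assumption for reproducibility
    others = [o for o in universe if o != keep]
    return [sorted(rng.sample(others, size - 1) + [keep])
            for _ in range(trials)]

# 20 random five-alternative subsets of U6 = {o1, ..., o48}, each containing o8.
subsets = sample_subsets([f"o{i}" for i in range(1, 49)], keep="o8")
```

Each resulting subset is then ranked with the OGRNVRS method, and the membership of o 8 in the CO solution(s) is checked, as reported in Table 29.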

7.3. Experimental Analysis on Large Datasets

This section verifies the applicability of our method to large-scale datasets by using a dataset from the UCI database.
The dataset, related to air quality (https://archive.ics.uci.edu/ml/datasets/Air+Quality (accessed on 29 November 2025)), comprises 15 attributes and 9357 alternatives. We selected four attributes (including both benefit and cost attributes) to streamline the model and enhance interpretability. The four attributes were CO (GT), PT08.S2 (NMHC), PT08.S3 (NOx), and PT08.S4 (NO2), with PT08.S3 (NOx) being a cost attribute and the remaining three being benefit attributes. With parameters set to α = 0.8, β = 0.2, k = 0.5, a = 0.3, ⊕ = ⊕_B, and ⊖ = ⊖_B, the ranking of the 9357 alternatives via our OGRNVRS method is displayed in Figure 7. Under these parameter settings, the running time of the algorithm was 125.4 s.
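A minimal sketch of the benefit/cost min-max normalization assumed when building the fuzzy decision matrix (benefit attributes: (v − min)/(max − min); cost attributes: (max − v)/(max − min)). It is illustrated on the CPU data of Table 8, where Ã3 is the cost column, and its output agrees with Table 9; for the air-quality data, PT08.S3 (NOx) would play the role of the cost column:

```python
def normalize(matrix, cost_cols):
    # Min-max normalize each column; cost columns are reversed so 1 is best.
    cols = list(zip(*matrix))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(hi[j] - v) / (hi[j] - lo[j]) if j in cost_cols
             else (v - lo[j]) / (hi[j] - lo[j])
             for j, v in enumerate(row)]
            for row in matrix]

# CPU data of Table 8 (rows o1..o8, columns A~1..A~6; A~3 is the cost attribute).
raw = [
    [24000, 8000, 26, 32, 8, 16],
    [24000, 4000, 48, 32, 8, 24],
    [32000, 16000, 30, 256, 16, 24],
    [32000, 2000, 140, 32, 1, 54],
    [16000, 4000, 38, 128, 16, 32],
    [32000, 2000, 50, 48, 26, 52],
    [32000, 8000, 26, 64, 12, 16],
    [64000, 8000, 30, 96, 12, 176],
]
fuzzy = normalize(raw, cost_cols={2})
# fuzzy[0] ≈ [0.167, 0.429, 1.0, 0.0, 0.28, 0.0], the o1 row of Table 9.
```

The same construction, applied column by column to the 9357 air-quality records, produces the fuzzy decision matrix fed to the OGRNVRS ranking.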
As shown in Figure 7, the ranking result was complete, with the 6193rd alternative identified as the optimal solution and the 6193rd, 6194th, 5521st, and 5737th alternatives being the CO solutions. This confirms that our approach is viable for complex datasets containing abundant alternatives.
Remark 9.
Incomplete rankings may emerge for certain datasets, especially those with a large volume of data. This is a common phenomenon, as every data-processing approach may suffer from information loss. To mitigate this problem in our method, we put forward the following two schemes:
(1) Our proposed OGRNVRS and PGRNVRS methods incorporate four parameters each, while the CGRNVRS method includes five parameters. Additionally, all our methods offer a variety of semi-grouping functions; thus, adjusting these parameters and functions can potentially produce a complete ordering.
(2) Numerous GFRNSOs can be constructed using the fuzzy decision matrix, but only one was employed in the present experiments. Therefore, adopting a more refined GFRNSO to optimize the weights may yield a complete ranking.

8. Conclusions and Future Work

This paper contributed a theory of OPCAPFRSs based on GFRNSs via semi-grouping functions and established a novel MADM approach by integrating it with VIKOR. The main conclusions and future research avenues are outlined below.
  • This study integrated concepts from VPRSs, MGRSs, GRNSs, and fuzzy theory. Based on this integration, we proposed the OPCAPFRS within the GFRNS framework. A detailed examination of the fundamental properties of the OPCAPFRS was conducted. Notably, under the condition that the GFRNSO is anti-reflexive, we confirmed that the new approximation operators satisfied a GIP. This finding addresses, to a significant degree, the common limitation where standard VPRS models often fail to maintain this property. Building upon the GIP, this study introduced the concept of accuracy measure and then calculated two types of attribute weights before linearly combining them as the new weight.
  • We designed a novel decision-making approach by integrating the VIKOR approach with the OPCAPFRS method (OPCAPFRS–VIKOR method). The practical effectiveness of this method was validated through its application to a real-world MADM problem: selecting an optimal CPU. Comparative assessments, parameter sensitivity investigations, and experimental findings jointly verified the feasibility and merits of the proposed approach. The advantage of the OPCAPFRS–VIKOR method lies in its objectivity: it relies more on calculations than on expert experience, making it suitable for scenarios with large datasets. Typical case studies of general MADM methods involve only dozens to hundreds of samples; in contrast, our method can handle thousands or even tens of thousands of samples.
The limitation of the OPCAPFRS–VIKOR method is that it cannot handle incomplete data (i.e., data with missing values), which we will address in our future work. Moreover, we found that VPRSs and MGRSs were also commonly applied to multi-attribute group decision making [22] or attribute reduction [9,19]. In our future work, we plan to apply the OPCAPFRS method to address datasets with missing data and extend its application scenarios to group decision-making and attribute reduction.

Author Contributions

Conceptualization, X.M. and Y.X.; methodology, Y.X.; software, X.M.; validation, X.M.; formal analysis, X.M.; investigation, X.M.; resources, X.M.; writing—original draft preparation, X.M. and Y.X.; writing—review and editing, Y.X.; supervision, X.M. and Y.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (12371462, 12231007) and Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX25-1558).

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors gratefully acknowledge the highly constructive comments provided by the editor and reviewers. Furthermore, the authors would like to thank Wei Yao from NUIST for valuable discussions and helpful suggestions during the preparation of this manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356. [Google Scholar] [CrossRef]
  2. Dubois, D.; Prade, H. Rough fuzzy sets and fuzzy rough sets. Int. J. Gen. Syst. 1990, 17, 191–209. [Google Scholar] [CrossRef]
  3. Wen, X.F.; Zhang, X.H.; Lei, T. Intuitionistic fuzzy (IF) overlap functions and IF-rough sets with applications. Symmetry 2021, 13, 1494. [Google Scholar] [CrossRef]
  4. Yao, W.; She, Y.H.; Lu, L.X. Metric-based L-fuzzy rough sets: Approximation operators and definable fuzzy subsets. Knowl.-Based Syst. 2019, 163, 91–102. [Google Scholar] [CrossRef]
  5. Xie, J.W.; Li, L.Q.; Zou, D.D. Two granular variable precision L-fuzzy (O, G)-rough sets. Comput. Appl. Math. 2025, 44, 155. [Google Scholar] [CrossRef]
  6. Zhang, K.; Zhan, J.M.; Wu, W.Z.; Alcantud, J.C.R. Fuzzy β-covering based (I,T)-fuzzy rough set models and applications to multi-attribute decision-making. Comput. Ind. Eng. 2019, 128, 605–621. [Google Scholar] [CrossRef]
  7. Han, N.N.; Qiao, J.S.; Li, T.B.; Ding, W.P. Multigranulation fuzzy probabilistic rough sets induced by overlap functions and their applications. Fuzzy Sets Syst. 2024, 481, 108893. [Google Scholar] [CrossRef]
  8. Zhang, X.H.; Li, M.Y.; Shao, S.T.; Wang, J.Q. (I,O)-fuzzy rough sets based on overlap functions with their applications to feature selection and image edge extraction. IEEE Trans. Fuzzy Syst. 2024, 32, 1796–1809. [Google Scholar] [CrossRef]
  9. Zhang, X.H.; Qu, Q.Q.; Wang, J.Q. Variable precision fuzzy rough sets based on overlap functions with application to tumor classification. Inf. Sci. 2024, 666, 120451. [Google Scholar] [CrossRef]
  10. Zhang, X.H.; Li, M.Y.; Liu, H. Overlap functions-based fuzzy mathematical morphological operators and their applications in image edge extraction. Fractal Fract. 2023, 7, 465. [Google Scholar] [CrossRef]
  11. Çekik, R. Effective text classification through supervised rough set-based term weighting. Symmetry 2025, 17, 90. [Google Scholar] [CrossRef]
  12. Ziarko, W. Variable precision rough set model. J. Comput. Syst. Sci. 1993, 46, 39–59. [Google Scholar] [CrossRef]
  13. D’eer, L.; Verbiest, N.; Cornelis, C.; Godo, L. A comprehensive study of implicator-conjunctor-based and noise-tolerant fuzzy rough sets: Definitions, properties and robustness analysis. Fuzzy Sets Syst. 2015, 275, 1–38. [Google Scholar] [CrossRef]
  14. Jia, C.Z.; Li, L.Q.; Li, X.R. A three-way decision combining multi-granularity variable precision fuzzy rough set and TOPSIS method. Int. J. Approx. Reason. 2025, 176, 109318. [Google Scholar] [CrossRef]
  15. Qian, Y.H.; Liang, J.Y.; Yao, Y.Y.; Dang, C.Y. MGRS: A multi-granulation rough set. Inf. Sci. 2010, 180, 949–970. [Google Scholar] [CrossRef]
  16. Qian, Y.H.; Li, S.Y.; Liang, J.Y.; Shi, Z.Z.; Wang, F. Pessimistic rough set based decisions: A multigranulation fusion strategy. Inf. Sci. 2014, 264, 196–210. [Google Scholar] [CrossRef]
  17. Lin, G.; Qian, Y.; Li, J. NMGRS: Neighborhood-based multigranulation rough sets. Int. J. Approx. Reason. 2012, 53, 1080–1093. [Google Scholar] [CrossRef]
  18. Yao, Y.Y.; She, Y.H. Rough set models in multigranulation spaces. Inf. Sci. 2016, 327, 40–56. [Google Scholar] [CrossRef]
  19. Chen, J.Y.; Zhu, P. A variable precision multigranulation rough set model and attribute reduction. Soft Comput. 2023, 27, 85–106. [Google Scholar] [CrossRef]
  20. Feng, T.; Mi, J.S. Variable precision multigranulation decision-theoretic fuzzy rough sets. Knowl. Based Syst. 2016, 91, 93–101. [Google Scholar] [CrossRef]
  21. Shi, Z.Q.; Xie, S.R.; Li, L.Q. Generalized fuzzy neighborhood system-based multigranulation variable precision fuzzy rough sets with double TOPSIS method to MADM. Inf. Sci. 2023, 643, 119251. [Google Scholar] [CrossRef]
  22. Sun, B.Z.; Zhang, X.R.; Qi, C.; Chu, X.L. Neighborhood relation-based variable precision multigranulation Pythagorean fuzzy rough set approach for multi-attribute group decision making. Int. J. Approx. Reason. 2022, 151, 1–20. [Google Scholar] [CrossRef]
  23. Sun, S.B.; Li, L.Q.; Hu, K. A new approach to rough set based on remote neighborhood systems. Math. Probl. Eng. 2019, 8712010. [Google Scholar] [CrossRef]
  24. Lin, T.Y. Neighborhood systems: A qualitative theory for fuzzy and rough sets. In Advances in Machine Intelligence and Soft Computing; Wang, P., Ed.; Duke University: Durham, NC, USA, 1997; Volume 4, pp. 132–155. [Google Scholar]
  25. Li, L.Q.; Jin, Q.; Yao, B.X. A rough set model based on fuzzifying neighborhood systems. Soft Comput. 2020, 24, 6085–6099. [Google Scholar] [CrossRef]
  26. Li, L.Q.; Yao, B.X.; Zhan, J.M.; Jin, Q. L-fuzzifying approximation operators derived from general L-fuzzifying neighborhood systems. Int. J. Mach. Learn. Cybern. 2021, 12, 1343–1367. [Google Scholar] [CrossRef]
  27. Li, L.Q.; Jin, Q. Stratified L-convex groups. Hacet. J. Math. Stat. 2025, 54, 2335–2349. [Google Scholar] [CrossRef]
  28. Zhao, F.F.; Li, L.Q.; Sun, S.B. Rough approximation operators based on quantale-valued fuzzy generalized neighborhood systems. Iran. J. Fuzzy Syst. 2019, 16, 53–63. [Google Scholar]
  29. Sun, S.B.; Li, L.Q.; Hu, K.; Ramadan, A.A. L-fuzzy upper approximation operators associated with L-generalized fuzzy remote neighborhood systems of L-fuzzy points. AIMS Math. 2020, 5, 5368–5652. [Google Scholar] [CrossRef]
  30. Shi, Z.Q.; Li, L.Q.; Bo, C.X.; Zhan, J.M. Two novel three-way decision models based on fuzzy β-covering rough sets and prospect theory under q-rung orthopair fuzzy environments. Expert Syst. Appl. 2024, 244, 123050. [Google Scholar] [CrossRef]
  31. Su, J.; Wang, Y.; Li, J. A novel fuzzy covering rough set model based on generalized overlap functions and its application in MCDM. Symmetry 2023, 15, 647. [Google Scholar] [CrossRef]
  32. Yu, D.Q.; Ye, T. Uncovering the academic evolution of VIKOR method: A comprehensive main path analysis. J. Oper. Res. Soc. 2025, 76, 393–409. [Google Scholar] [CrossRef]
  33. Sanayei, A.; Mousavi, S.F.; Yazdankhah, A. Group decision making process for supplier selection with VIKOR under fuzzy environment. Expert Syst. Appl. 2010, 37, 24–30. [Google Scholar] [CrossRef]
  34. Opricovic, S. Fuzzy VIKOR with an application to water resources planning. Expert Syst. Appl. 2011, 38, 12983–12990. [Google Scholar] [CrossRef]
  35. Madhavi, S.; Santhosh, N.C.; Rajkumar, S.; Praveen, R. Pythagorean fuzzy sets-based VIKOR and TOPSIS-based multi-criteria decision-making model for mitigating resource deletion attacks in WSNs. J. Intell. Fuzzy Syst. 2023, 44, 9441–9459. [Google Scholar] [CrossRef]
  36. Kansal, D.; Kumar, S. Multi-criteria decision-making based on intuitionistic fuzzy exponential knowledge and similarity measure and improved VIKOR method. Granul. Comput. 2024, 9, 26. [Google Scholar] [CrossRef]
  37. Jiang, H.B.; Zhan, J.M.; Sun, B.Z.; Alcantud, J.C.R. An MADM approach to covering-based variable precision fuzzy rough sets: An application to medical diagnosis. Int. J. Mach. Learn. Cybern. 2020, 11, 2181–2207. [Google Scholar] [CrossRef]
  38. Bustince, H.; Fernandez, J.; Mesiar, R.; Montero, J.; Orduna, R. Overlap Functions. Nonlinear Anal. Theory Methods Appl. 2010, 72, 1488–1499. [Google Scholar] [CrossRef]
  39. Bustince, H.; Pagola, M.; Mesiar, R.; Hullermeier, E.; Herrera, F. Grouping, overlap, and generalized bientropic functions for fuzzy modeling of pairwise comparisons. IEEE Trans. Fuzzy Syst. 2011, 20, 405–415. [Google Scholar] [CrossRef]
  40. Batista, T.; Bedregal, B.; Moraes, R. Constructing multi-layer classifier ensembles using the choquet integral based on overlap and quasi-overlap functions. Neurocomputing 2022, 500, 413–421. [Google Scholar] [CrossRef]
  41. Ferrero-Jaurrieta, M.; Paiva, R.; Cruz, A.; Bedregal, B.; Zhang, X.H.; Takáč, Z.; López-Molina, C.; Bustince, H. Reduction of complexity using generators of pseudo-overlap and pseudo-grouping functions. Fuzzy Sets Syst. 2024, 490, 109025. [Google Scholar] [CrossRef]
  42. Shen, H.H.; Yao, Q.; Pan, X.D. The fuzzy inference system based on axiomatic fuzzy sets using overlap functions as aggregation operators and its approximation properties. Appl. Intell. 2024, 54, 10414–10437. [Google Scholar] [CrossRef]
  43. Zhang, X.H.; Wang, M.; Bedregal, B.; Li, M.Y.; Liang, R. Semi-overlap functions and novel fuzzy reasoning algorithms with applications. Inf. Sci. 2022, 614, 104–122. [Google Scholar] [CrossRef]
  44. Li, X.R.; Li, L.Q.; Jia, C.Z. Weighted multi-granularity fuzzy probabilistic rough set based on semi-overlapping function and its application in three-way decision. Eng. Appl. Artif. Intell. 2025, 161, 112023. [Google Scholar] [CrossRef]
  45. Li, L.Q.; Jia, C.Z.; Li, X.R. A novel intuitionistic fuzzy VIKOR method to MCDM based on intuitionistic fuzzy β*-covering rough set. Expert Syst. Appl. 2025, 293, 128713. [Google Scholar] [CrossRef]
  46. Yao, W.; Zhang, G.X.; Zhou, C.J. Real-valued hemimetric-based fuzzy rough sets and an application to contour extraction of digital surfaces. Fuzzy Sets Syst. 2023, 459, 201–229. [Google Scholar] [CrossRef]
  47. Yager, R.R. On ordered weighted averaging aggregation operators in multicriteria decision making. IEEE Trans. Syst. Man Cybern. 1988, 18, 183–190. [Google Scholar] [CrossRef]
  48. Xu, Z.S.; Da, Q.L. The ordered weighted geometric averaging operators. Int. J. Intell. Syst. 2002, 17, 709–716. [Google Scholar] [CrossRef]
  49. Hwang, C.L.; Yoon, K. Multiple Attribute Decision Methods and Applications; Springer: Berlin/Heidelberg, Germany, 1981. [Google Scholar]
  50. Jiang, H.B.; Zhan, J.M.; Chen, D.G. Covering-based variable precision (I,T)-fuzzy rough sets with applications to multiattribute decision-making. IEEE Trans. Fuzzy Syst. 2018, 27, 1558–1572. [Google Scholar] [CrossRef]
  51. Shi, Z.Q.; Li, L.Q.; Xie, S.R.; Xie, J.L. The variable precision fuzzy rough set based on overlap and grouping functions with double weight method to MADM. Appl. Intell. 2024, 54, 7696–7715. [Google Scholar] [CrossRef]
  52. Sun, Y.; Pang, B.; Mi, J.S.; Wu, W.Z. Maximal consistent blocks-based optimistic and pessimistic probabilistic rough fuzzy sets and their applications in three-way multiple attribute decision-making. Int. J. Approx. Reason. 2025, 187, 109529. [Google Scholar] [CrossRef]
  53. Zhang, K.; Zhan, J.M.; Yao, Y.Y. TOPSIS method based on a fuzzy covering approximation space: An application to biological nano-materials selection. Inf. Sci. 2019, 502, 297–329. [Google Scholar] [CrossRef]
Figure 1. Relationship between grouping functions, semi-grouping functions, and t-conorms.
Figure 2. Mind map of this paper.
Figure 3. Comparison of ranks via different approaches.
Figure 4. Comparison of ranking values via different approaches.
Figure 5. PCCs between different methods.
Figure 6. Comparison of ranks under different datasets.
Figure 7. The order of the 9357 data elements given by the OPCAPFRS method.
Table 1. Short names of some common words.
Full Name | Short Name
Variable precision fuzzy rough set | VPFRS
Multi-granularity fuzzy rough set | MGFRS
Multi-attribute decision-making | MADM
Optimistic | OP
Pessimistic | PE
Compromise | CO
Optimistic, pessimistic, and compromise variable precision fuzzy rough set | OPCAPFRS
Generalized inclusion property | GIP
VlseKriterijumska Optimizacija Kompromisno Resenje | VIKOR
Remote neighborhood system | RNS
Generalized neighborhood system | GNS
Generalized fuzzy neighborhood system | GFNS
Generalized fuzzy remote neighborhood system | GFRNS
Variable precision rough set | VPRS
Rough set theory | RST
Fuzzy rough set | FRS
Multi-granulation rough sets | MGRS
OP ( α , β ) -variable precision fuzzy rough set | OAPFRS
PE ( α , β ) -variable precision fuzzy rough set | PAPFRS
CO ( α , β ) -variable precision fuzzy rough set | CAPFRS
Table 2. Some examples of ⊕ and ⊖.
SG 6 SG 7 SG 8 DN
ζ M ω = max { ζ , ω } ζ M ω = 0 , if ζ ω ζ , otherwise ×
ζ P ω = ζ + ω ζ ω ζ P ω = 0 , if ζ ω ζ ω 1 ω , otherwise ×
ζ 2 ω = 1 ( 1 ζ ) 2 ( 1 ω ) 2 ζ 2 ω = 0 , if ( 1 ω ) 2 1 ζ 1 1 ζ ( 1 ω ) 2 , otherwise ××××
ζ B ω = ζ + ω 2 ζ ω 2 ζ ω , if ζ + ω 2 1 , otherwise ζ B ω = 0 , if ζ 2 ω 1 2 ζ ζ ω ω 1 2 ω + ζ , otherwise ××××
ζ L ω = min { 1 , ζ + ω } ζ L ω = 0 , if ζ ω ζ ω , otherwise
ζ 1 ω = min { 1 , ζ + ω ζ ω 1 ζ ω } ζ 1 ω = 0 , if ζ ω ζ ω 1 ω + ζ ω , otherwise ×
Table 3. Comparison of different RSTs.
Model | Multi-Granularity | Inclusion Property | Fault Tolerance | Fuzzy Evaluation
VPRS | × | × | ✓ | ×
MGRS | ✓ | ✓ | × | ×
FRS | × | ✓ | × | ✓
OPCAPFRS | ✓ | ✓ | ✓ | ✓
Table 4. The antireflexive fuzzy remote neighborhood RN .
RN | o 1 | o 2 | o 3
o 1 | 0 | 0.2 | 0.1
o 2 | 0.2 | 0 | 0.4
o 3 | 0.1 | 0.4 | 0
Table 5. MADM matrix.
U \ A A 1 ˜ A 2 ˜ A q ˜
o 1 v 11 v 12 v 1 q
o 2 v 21 v 22 v 2 q
o p v p 1 v p 2 v p q
Table 6. Fuzzy-decision making matrix.
U \ A A 1 ˜ A 2 ˜ A q ˜
o 1 V 11 V 12 V 1 q
o 2 V 21 V 22 V 2 q
o p V p 1 V p 2 V p q
Table 7. Time complexity of Algorithm 1.
 | Step 1 | Step 2 | Step 3 | Step 4 | Step 5 | Step 6 | Step 7 | Overall
Time Complexity | O(pq) | O(p²q²) | O(q) | O(pq) | O(pq) | O(p²q) | O(p) | O(p²q²)
Table 8. Raw data.
U \ A | Ã1 | Ã2 | Ã3 | Ã4 | Ã5 | Ã6
o 1 | 24,000 | 8000 | 26 | 32 | 8 | 16
o 2 | 24,000 | 4000 | 48 | 32 | 8 | 24
o 3 | 32,000 | 16,000 | 30 | 256 | 16 | 24
o 4 | 32,000 | 2000 | 140 | 32 | 1 | 54
o 5 | 16,000 | 4000 | 38 | 128 | 16 | 32
o 6 | 32,000 | 2000 | 50 | 48 | 26 | 52
o 7 | 32,000 | 8000 | 26 | 64 | 12 | 16
o 8 | 64,000 | 8000 | 30 | 96 | 12 | 176
Table 9. Fuzzy-decision making matrix.
U \ A | Ã1 | Ã2 | Ã3 | Ã4 | Ã5 | Ã6
o 1 | 0.167 | 0.429 | 1.0 | 0.0 | 0.28 | 0.0
o 2 | 0.167 | 0.143 | 0.807 | 0.0 | 0.28 | 0.05
o 3 | 0.333 | 1.0 | 0.965 | 1.0 | 0.6 | 0.05
o 4 | 0.333 | 0.0 | 0.0 | 0.0 | 0.0 | 0.238
o 5 | 0.0 | 0.143 | 0.895 | 0.429 | 0.6 | 0.1
o 6 | 0.333 | 0.0 | 0.789 | 0.071 | 1.0 | 0.225
o 7 | 0.333 | 0.429 | 1.0 | 0.143 | 0.44 | 0.0
o 8 | 1.0 | 0.429 | 0.965 | 0.286 | 0.44 | 1.0
Table 10. Upper approximation matrix of A j .
U \ A | A1 | A2 | A3 | A4 | A5 | A6
o 1 | 0.4996 | 0.6004 | 0.8889 | 0.429 | 0.8444 | 0.238
o 2 | 0.4996 | 0.6004 | 0.8889 | 0.429 | 0.8444 | 0.2822
o 3 | 0.4996 | 0.8889 | 0.8889 | 0.8889 | 0.75 | 0.2822
o 4 | 0.4996 | 0 | 0.7925 | 0 | 0.6 | 0.3845
o 5 | 0.333 | 0.6674 | 0.8889 | 0.6674 | 0.88 | 0.1818
o 6 | 0.82 | 0.429 | 0.8821 | 0.4859 | 0.8889 | 0.82
o 7 | 0.4996 | 0.6674 | 0.8889 | 0.6674 | 0.8706 | 0.238
o 8 | 0.8889 | 0.6004 | 0.8889 | 0.4448 | 0.82 | 0.8889
Table 11. Lower approximation matrix of A j .
U \ A | A1 | A2 | A3 | A4 | A5 | A6
o 1 | 0.1111 | 0.2454 | 0.6670 | 0.1111 | 0.1628 | 0.1111
o 2 | 0.1111 | 0.1111 | 0.6526 | 0.1111 | 0.1628 | 0.1111
o 3 | 0.1998 | 0.5101 | 0.8357 | 0.5101 | 0.4286 | 0.1111
o 4 | 0.1998 | 0.1111 | 0.1111 | 0.1111 | 0.1111 | 0.1351
o 5 | 0.1111 | 0.1111 | 0.6586 | 0.2210 | 0.3226 | 0.1111
o 6 | 0.1998 | 0.1111 | 0.6515 | 0.1111 | 0.5072 | 0.1268
o 7 | 0.1834 | 0.2454 | 0.6670 | 0.1111 | 0.2246 | 0.1111
o 8 | 0.6492 | 0.2731 | 0.6640 | 0.1669 | 0.2821 | 0.6373
Table 12. Weights.
W \ A | A1 | A2 | A3 | A4 | A5 | A6
W | 0.1668 | 0.1571 | 0.2172 | 0.1436 | 0.1600 | 0.1553
Table 13. Group utility value, individual regret value, and CO value.
VIKOR \ U | o 1 | o 2 | o 3 | o 4 | o 5 | o 6 | o 7 | o 8
S | 0.6427 | 0.7218 | 0.3304 | 0.9075 | 0.6100 | 0.5679 | 0.5689 | 0.2894
R | 0.1553 | 0.1475 | 0.1475 | 0.2172 | 0.1668 | 0.1571 | 0.1553 | 0.1025
Q | 0.5158 | 0.5460 | 0.2293 | 1.0000 | 0.5397 | 0.4631 | 0.4561 | 0.0000
Table 14. Upper approximation of T S , T R , and T Q .
Fuzzy Sets \ U | o 1 | o 2 | o 3 | o 4 | o 5 | o 6 | o 7 | o 8
T S | 0.7825 | 0.8384 | 0.4967 | 0.8889 | 0.7578 | 0.7244 | 0.7252 | 0.4489
T R | 0.5780 | 0.7475 | 0.2571 | 0.8523 | 0.4736 | 0.2858 | 0.2912 | 0.1860
T Q | 0.7614 | 0.8271 | 0.3731 | 0.8889 | 0.7313 | 0.6896 | 0.6906 | 0.2894
Table 15. Lower approximation of T S , T R and T Q .
Fuzzy Sets \ U | o 1 | o 2 | o 3 | o 4 | o 5 | o 6 | o 7 | o 8
T S | 0.3900 | 0.4145 | 0.1979 | 0.5562 | 0.4093 | 0.3498 | 0.3447 | 0.1692
T R | 0.1111 | 0.1111 | 0.1111 | 0.1218 | 0.1111 | 0.1111 | 0.1111 | 0.1111
T Q | 0.3476 | 0.3755 | 0.1295 | 0.6599 | 0.3696 | 0.3013 | 0.2954 | 0.1111
Table 16. Sorting functions.
Ranking Values \ U | o 1 | o 2 | o 3 | o 4 | o 5 | o 6 | o 7 | o 8
C S | 0.6794 | 0.7468 | 0.3815 | 0.8223 | 0.6564 | 0.6129 | 0.6128 | 0.3374
C R | 0.4277 | 0.6068 | 0.1906 | 0.7472 | 0.3388 | 0.2080 | 0.2113 | 0.1502
C Q | 0.6506 | 0.7291 | 0.2711 | 0.8325 | 0.6232 | 0.5702 | 0.5700 | 0.2102
Table 17. Ranks.
VIKOR\ U Ranking Results
C S o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4
C R o 8 > o 3 > o 6 > o 7 > o 5 > o 1 > o 2 > o 4
C Q o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4
Table 18. Ranking for different approaches.
Approaches | Ranking Results | Compromise Solutions
OWA [47] o 8 > o 3 > o 6 > o 7 > o 5 > o 1 > o 2 > o 4
OWGA [48] o 8 > o 3 > o 7 > o 6 > o 5 > o 2 > o 1 > o 4
TOPSIS [49] o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4
J’s method [50] o 8 > o 3 > o 4 > o 6 > o 5 > o 1 > o 7 > o 2
Z’s method [6] o 3 > o 8 > o 6 > o 5 > o 7 > o 1 > o 2 > o 4
ZS’s method [51] o 3 > o 8 > o 7 > o 5 > o 6 > o 1 > o 2 > o 4
YS’s method [52] o 8 > o 3 > o 6 > o 7 > o 1 > o 5 > o 2 > o 4
OGRNVRS o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
PGRNVRS o 8 > o 3 > o 6 > o 7 > o 1 > o 5 > o 2 > o 4 o 8
CGRNVRS o 8 > o 3 > o 6 > o 7 > o 5 > o 1 > o 2 > o 4 o 8
Table 19. Time and space complexity of the different methods.
Approaches | Time Complexity | Space Complexity
OWA [47] O ( p · q log 2 q ) O ( p q )
OWGA [48] O ( p · q log 2 q + p log 2 p ) O ( p q )
TOPSIS [49] O ( p q + p log 2 p ) O ( p q )
J’s method [50] O ( p 2 ) O ( p 2 )
Z’s method [6] O ( p 2 ) O ( p 2 )
ZS’s method [51] O ( p 2 q ) O ( p 2 q )
YS’s method [52] O ( p 2 q ) O ( p 2 q )
OPCAPFRS–VIKOR O ( p 2 q 2 ) O ( p 2 q )
Table 20. PCCs between different methods.
Methods | OWA | OWGA | TOPSIS | J's | Z's | ZS's | YS's | OGRNVRS | PGRNVRS | CGRNVRS
OWA | 1.000 | 0.939 | 0.964 | 0.728 | 0.891 | 0.990 | 0.982 | 0.993 | 0.913 | 0.877
OWGA | - | 1.000 | 0.975 | 0.500 | 0.902 | 0.911 | 0.961 | 0.903 | 0.770 | 0.746
TOPSIS | - | - | 1.000 | 0.543 | 0.903 | 0.952 | 0.989 | 0.935 | 0.806 | 0.776
J's method | - | - | - | 1.000 | 0.434 | 0.711 | 0.648 | 0.792 | 0.903 | 0.910
Z's method | - | - | - | - | 1.000 | 0.888 | 0.876 | 0.848 | 0.679 | 0.614
ZS's method | - | - | - | - | - | 1.000 | 0.964 | 0.984 | 0.888 | 0.834
YS's method | - | - | - | - | - | - | 1.000 | 0.969 | 0.879 | 0.854
OGRNVRS | - | - | - | - | - | - | - | 1.000 | 0.952 | 0.915
PGRNVRS | - | - | - | - | - | - | - | - | 1.000 | 0.981
CGRNVRS | - | - | - | - | - | - | - | - | - | 1.000
Table 21. Sensitivity analysis for α .
α | Ranking Results (OGRNVRS) | Compromise Solution(s)
0.1 o 8 > o 3 > o 7 > o 6 > o 1 > o 5 > o 2 > o 4 o 8 , o 3
0.2 o 8 > o 3 > o 7 > o 6 > o 1 > o 2 > o 5 > o 4 o 8 , o 3
0.3 o 8 > o 3 > o 7 > o 6 > o 1 > o 2 > o 5 > o 4 o 8 , o 3
0.4 o 8 > o 3 > o 6 > o 7 > o 1 > o 5 > o 2 > o 4 o 8 , o 3
0.5 o 8 > o 3 > o 6 > o 7 > o 1 > o 5 > o 2 > o 4 o 8 , o 3
0.6 o 8 > o 3 > o 6 > o 7 > o 1 > o 5 > o 2 > o 4 o 8 , o 3
0.7 o 8 > o 3 > o 6 > o 7 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.8 o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.9 o 8 > o 3 > o 6 > o 7 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
1.0 o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
Table 22. Sensitivity analysis for β .
β | Ranking Results (OGRNVRS) | Compromise Solution(s)
0.0 o 8 > o 3 > o 6 > o 7 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.1 o 8 > o 3 > o 6 > o 7 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.2 o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.3 o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.4 o 8 > o 3 > o 6 > o 7 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.5 o 8 > o 3 > o 6 > o 7 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.6 o 8 > o 3 > o 6 > o 7 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.7 o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.8 o 8 > o 3 > o 7 > o 6 > o 1 > o 5 > o 2 > o 4 o 8 , o 3
0.9 o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4 o 1 , o 2 , o 3 , o 5 , o 6 , o 7 , o 8
Table 23. Sensitivity analysis for η .
η | Ranking Results (CGRNVRS) | Compromise Solution(s)
0.1 o 3 > o 8 > o 5 > o 7 > o 6 > o 1 > o 2 > o 4 o 3
0.2 o 8 > o 3 > o 6 > o 7 > o 4 > o 1 > o 2 > o 5 o 8
0.3 o 8 > o 3 > o 6 > o 7 > o 1 > o 2 > o 4 > o 5 o 8
0.4 o 8 > o 3 > o 6 > o 7 > o 1 > o 2 > o 5 > o 4 o 8
0.5 o 8 > o 3 > o 6 > o 7 > o 5 > o 1 > o 2 > o 4 o 8
0.6 o 8 > o 3 > o 6 > o 5 > o 7 > o 1 > o 2 > o 4 o 8
0.7 o 8 > o 3 > o 6 > o 5 > o 7 > o 1 > o 2 > o 4 o 8
0.8 o 8 > o 3 > o 6 > o 5 > o 7 > o 1 > o 2 > o 4 o 8
0.9 o 8 > o 3 > o 6 > o 5 > o 7 > o 1 > o 2 > o 4 o 8
1.0 o 8 > o 3 > o 6 > o 5 > o 7 > o 1 > o 2 > o 4 o 8
Table 24. Sensitivity analysis for k .
k | Ranking Results (OGRNVRS) | Compromise Solution(s)
0.0 o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4 o 8
0.1 o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4 o 8
0.2 o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4 o 8
0.3 o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4 o 8
0.4 o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.5 o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.6 o 8 > o 3 > o 6 > o 7 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.7 o 8 > o 3 > o 6 > o 7 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.8 o 8 > o 3 > o 6 > o 7 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.9 o 8 > o 3 > o 6 > o 7 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
1.0 o 8 > o 3 > o 6 > o 7 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
Table 25. Sensitivity analysis for a.
a | Ranking Results (OGRNVRS) | Compromise Solution(s)
0.0 o 8 > o 3 > o 6 > o 7 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.1 o 8 > o 3 > o 6 > o 7 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.2 o 8 > o 3 > o 6 > o 7 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.3 o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.4 o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.5 o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.6 o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.7 o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.8 o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
0.9 o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
1.0 o 8 > o 3 > o 7 > o 6 > o 5 > o 1 > o 2 > o 4 o 8 , o 3
Table 26. Sensitivity analysis for ⊕ and ⊖.

| (⊕, ⊖) | Ranking Results (OGRNVRS) | Compromise Solution(s) |
|---|---|---|
| (⊕_B, ⊖_B) | o8 > o3 > o7 > o6 > o5 > o1 > o2 > o4 | o8, o3 |
| (⊕_1, ⊖_1) | o8 > o3 > o1 > o2 > o4 > o5 > o6 > o7 | o8 |
| (⊕_2, ⊖_2) | o8 > o3 > o6 > o7 > o5 > o1 > o2 > o4 | o8 |
| (⊕_L, ⊖_L) | o8 > o3 > o1 > o2 > o4 > o5 > o6 > o7 | o8 |
| (⊕_M, ⊖_M) | o8 > o3 > o6 > o7 > o5 > o1 > o2 > o4 | o8, o1 |
| (⊕_P, ⊖_P) | o8 > o3 > o6 > o7 > o5 > o1 > o2 > o4 | o8 |
Table 27. The 48 alternatives.
U / A A 1 ˜ A 2 ˜ A 3 ˜ A 4 ˜ A 5 ˜ A 6 ˜ U / A A 1 ˜ A 2 ˜ A 3 ˜ A 4 ˜ A 5 ˜ A 6 ˜
o 1 24,00080002632816 o 25 400010003008364
o 2 24,00040004832824 o 26 16,0002000502416
o 3 32,00016,000302561624 o 27 8000200016032113
o 4 32,000200014032154 o 28 1000512240813
o 5 16,0004000381281632 o 29 30001000330024
o 6 32,000200050482652 o 30 16,00080005048110
o 7 32,000800026641216 o 31 40002000105838
o 8 64,0008000309612176 o 32 80002000923216
o 9 16,0005122000432 o 33 8000200011632528
o 10 5000256320416 o 34 80001000100026
o 11 26201310251311224 o 35 4000512250047
o 12 16,000200050836 o 36 64,00016,00023641632
o 13 12,00010001339312 o 37 40002000115215
o 14 800010001339312 o 38 24,00080003816048
o 15 800010001002426 o 39 32,0008000261282432
o 16 20,970524056301224 o 40 16,00020005024616
o 17 800020002006415 o 41 7681923006624
o 18 80003000758348 o 42 12,0007681806131
o 19 20002561750324 o 43 32,000200014032120
o 20 30007683000624 o 44 8000200014032154
o 21 30007683006624 o 45 200010001430516
o 22 4000512250017 o 46 400020001408120
o 23 32,00020005024626 o 47 16,0004000571612
o 24 800020002006415 o 48 24,000400057641216
Table 28. Alternatives ranking across increasing object sizes (8 to 48).

| Datasets | Ranks | Compromise Solution(s) |
|---|---|---|
| U1 = {o1, o2, …, o8} | o8 > o3 > o7 > o6 > o5 > o1 > o2 > o4 | o8, o3 |
| U2 = {o1, o2, …, o16} | o8 > o3 > o6 > o7 > o5 > o1 > o16 > o2 > o11 > o4 > o12 > o13 > o15 > o9 > o14 > o10 | o8, o3 |
| U3 = {o1, o2, …, o24} | o8 > o3 > o6 > o7 > o5 > o1 > o16 > o2 > o11 > o23 > o4 > o18 > o12 > o13 > o14 > o15 > o9 > o17 > o24 > o19 > o22 > o20 > o21 > o10 | o8 |
| U4 = {o1, o2, …, o32} | o8 > o3 > o6 > o7 > o5 > o1 > o16 > o11 > o2 > o23 > o30 > o4 > o18 > o12 > o26 > o32 > o15 > o13 > o31 > o14 > o27 > o9 > o17 > o24 > o19 > o22 > o28 > o21 > o25 > o20 > o10 > o29 | o8 |
| U5 = {o1, o2, …, o40} | o8 > o3 > o39 > o6 > o7 > o36 > o5 > o38 > o1 > o16 > o2 > o11 > o23 > o30 > o40 > o4 > o18 > o12 > o26 > o33 > o32 > o15 > o31 > o13 > o34 > o14 > o27 > o17 > o24 > o9 > o37 > o19 > o35 > o22 > o28 > o25 > o21 > o20 > o10 > o29 | o8 |
| U6 = {o1, o2, …, o48} | o8 > o3 > o36 > o39 > o6 > o7 > o5 > o38 > o1 > o48 > o16 > o11 > o2 > o23 > o30 > o40 > o47 > o4 > o18 > o12 > o26 > o43 > o33 > o44 > o32 > o15 > o31 > o13 > o34 > o14 > o27 > o9 > o42 > o37 > o17 > o46 > o24 > o45 > o19 > o35 > o22 > o28 > o25 > o41 > o21 > o20 > o10 > o29 | o8 |
Table 29. Sorting results of o8 in various datasets.

| Datasets | Ranks | Compromise Solution(s) |
|---|---|---|
| S1 = {o8, o10, o11, o24, o40} | o8 > o11 > o40 > o10 > o24 | o8 |
| S2 = {o8, o13, o18, o26, o33} | o8 > o18 > o26 > o33 > o13 | o8 |
| S3 = {o8, o3, o31, o37, o43} | o8 > o3 > o31 > o37 > o43 | o8 |
| S4 = {o8, o15, o19, o22, o44} | o8 > o44 > o15 > o19 > o22 | o8 |
| S5 = {o8, o24, o28, o33, o44} | o8 > o33 > o44 > o24 > o28 | o8 |
| S6 = {o8, o26, o34, o41, o45} | o8 > o26 > o45 > o34 > o41 | o8 |
| S7 = {o8, o7, o18, o40, o46} | o8 > o7 > o40 > o18 > o46 | o8 |
| S8 = {o8, o13, o21, o34, o45} | o8 > o13 > o21 > o34 > o45 | o8 |
| S9 = {o8, o26, o32, o37, o40} | o8 > o40 > o26 > o32 > o37 | o8 |
| S10 = {o8, o21, o28, o37, o45} | o8 > o21 > o28 > o37 > o45 | o8 |
| S11 = {o8, o2, o24, o30, o43} | o8 > o2 > o30 > o24 > o43 | o8 |
| S12 = {o8, o11, o15, o25, o47} | o8 > o11 > o47 > o15 > o25 | o8 |
| S13 = {o8, o10, o24, o31, o44} | o8 > o44 > o31 > o10 > o24 | o8 |
| S14 = {o8, o5, o25, o43, o46} | o8 > o5 > o43 > o46 > o25 | o8 |
| S15 = {o8, o16, o19, o30, o43} | o8 > o16 > o30 > o43 > o19 | o8 |
| S16 = {o8, o14, o23, o33, o48} | o8 > o48 > o33 > o14 > o23 | o8 |
| S17 = {o8, o9, o15, o31, o47} | o8 > o47 > o9 > o15 > o31 | o8 |
| S18 = {o8, o18, o26, o40, o46} | o8 > o40 > o18 > o26 > o46 | o8 |
| S19 = {o8, o19, o25, o27, o37} | o8 > o27 > o19 > o25 > o37 | o8 |
| S20 = {o8, o11, o31, o39, o48} | o8 > o39 > o11 > o48 > o31 | o8 |
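The "Ranking Results" and "Compromise Solution(s)" columns in the tables above come from the VIKOR step of the proposed method. As a minimal, self-contained sketch (assuming the classical VIKOR formulation on a plain crisp decision matrix, not the paper's OGRNVRS/CGRNVRS model; the matrix and weights below are hypothetical):

```python
def vikor(matrix, weights, v=0.5):
    """Classical VIKOR on a benefit-type decision matrix.

    Returns (order, compromise): alternatives sorted by ascending Q,
    and the compromise-solution set per the acceptable-advantage test.
    """
    m, n = len(matrix), len(matrix[0])
    # Ideal (best) and anti-ideal (worst) value per criterion.
    f_star = [max(row[j] for row in matrix) for j in range(n)]
    f_minus = [min(row[j] for row in matrix) for j in range(n)]
    S, R = [], []  # group utility S_i and individual regret R_i
    for row in matrix:
        terms = [weights[j] * (f_star[j] - row[j])
                 / ((f_star[j] - f_minus[j]) or 1)  # guard zero range
                 for j in range(n)]
        S.append(sum(terms))
        R.append(max(terms))
    s_star, s_minus = min(S), max(S)
    r_star, r_minus = min(R), max(R)
    # Q_i blends S and R; v weighs group utility vs. individual regret.
    Q = [v * (S[i] - s_star) / ((s_minus - s_star) or 1)
         + (1 - v) * (R[i] - r_star) / ((r_minus - r_star) or 1)
         for i in range(m)]
    order = sorted(range(m), key=lambda i: Q[i])
    # Acceptable advantage: runners-up with Q within DQ = 1/(m-1) of the
    # best alternative join it in the compromise set.
    DQ = 1 / (m - 1)
    compromise = [order[0]] + [i for i in order[1:]
                               if Q[i] - Q[order[0]] < DQ]
    return order, compromise
```

For instance, `vikor([[7, 9, 8], [8, 7, 6], [9, 6, 9]], [0.4, 0.3, 0.3])` ranks the third alternative first and, since neither runner-up passes the acceptable-advantage threshold, returns it as the sole compromise solution; this mirrors how a single o8 (or the pair o8, o3) appears in the tables' last column.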

Share and Cite

MDPI and ACS Style

Mei, X.; Xu, Y. Multi-Granulation Variable Precision Fuzzy Rough Set Based on Generalized Fuzzy Remote Neighborhood Systems and the MADM Application Design of a Novel VIKOR Method. Symmetry 2026, 18, 84. https://doi.org/10.3390/sym18010084
