Article

Novel Three-Way Decisions Models with Multi-Granulation Rough Intuitionistic Fuzzy Sets

1 College of Computer and Information Engineering, Henan Normal University, Xinxiang 453007, China
2 Engineering Lab of Henan Province for Intelligence Business & Internet of Things, Henan Normal University, Xinxiang 453007, China
* Authors to whom correspondence should be addressed.
Symmetry 2018, 10(11), 662; https://doi.org/10.3390/sym10110662
Submission received: 27 October 2018 / Revised: 11 November 2018 / Accepted: 16 November 2018 / Published: 21 November 2018
(This article belongs to the Special Issue Discrete Mathematics and Symmetry)

Abstract: Existing methods for constructing the granularity importance degree consider only the direct influence of a single granularity on decision-making and ignore the joint impact of the other granularities when performing granularity selection. This paper makes the following improvements. First, we define a more reasonable method for calculating the granularity importance degree among multiple granularities and give a granularity reduction algorithm based on it. Second, we combine the reduction sets of optimistic and pessimistic multi-granulation rough sets with intuitionistic fuzzy sets, respectively, and present their related properties. On this basis, to further remove the redundant objects in each granularity of the reduction sets, four novel three-way decisions models with multi-granulation rough intuitionistic fuzzy sets are developed. Finally, a series of concrete examples demonstrates that these joint models not only remove the redundant objects inside each granularity of the reduction sets, but also yield more suitable granularity selection results via the designed comprehensive score function and comprehensive accuracy function of granularities.

1. Introduction

Pawlak [1,2] proposed rough sets theory in 1982 as a method for dealing with inexact and uncertain information, and it has since been developed into a variety of theories [3,4,5,6]. For example, the multi-granulation rough sets (MRS) model, proposed by Qian et al. [9], is one of the important developments [7,8] and can also be regarded as a mathematical framework for granular computing. Within MRS, granularity reduction is a vital research topic. Considering the test cost of granularity structure selection in data mining and machine learning, Yang et al. constructed two reduction algorithms for cost-sensitive multi-granulation decision-making systems based on the definition of approximation quality [10]. By introducing the concept of distribution reduction [11] and taking the quality of approximate distribution as the measure in the multi-granulation decision rough sets model, Sang et al. proposed an α-lower approximate distribution reduction algorithm based on multi-granulation decision rough sets; however, the interactions among multiple granularities were not considered [12]. To overcome the problem of updating reductions when large-scale data vary dynamically, Jing et al. developed an incremental attribute reduction approach based on knowledge granularity from a multi-granulation view [13]. Other multi-granulation reduction methods have subsequently been put forward [14,15,16,17].
The notion of intuitionistic fuzzy sets (IFS), proposed by Atanassov [18,19], was initially developed in the framework of fuzzy sets [20,21]. In the literature, obtaining reasonable membership and non-membership functions is a key issue. To handle fuzzy information better, many researchers have extended the IFS model. Huang et al. combined IFS with MRS to obtain intuitionistic fuzzy MRS [22]. On the basis of fuzzy rough sets, Liu et al. constructed covering-based multi-granulation fuzzy rough sets [23]. Moreover, a multi-granulation rough intuitionistic fuzzy cut sets model was constructed by Xue et al. [24]. To reduce the classification errors and the limitations of ordering by a single theory, they further combined IFS with graded rough sets theory based on the dominance relation and extended the result to a multi-granulation perspective [25]. Under optimistic multi-granulation intuitionistic fuzzy rough sets, Wang et al. proposed a novel method for solving multiple criteria group decision-making problems [26]. However, the above studies rarely address the optimal granularity selection problem in intuitionistic fuzzy environments. The measurement of similarity between intuitionistic fuzzy sets is also an active research area, and some similarity measures for IFS are summarized in references [27,28,29]; however, these metric formulas cannot measure the importance degree of multiple granularities under the same IFS.
To further explain the semantics of decision-theoretic rough sets (DTRS), Yao proposed the three-way decisions theory [30,31], which greatly advanced the development of rough sets. As a risk decision-making method, the key strategy of three-way decisions is to divide the domain into acceptance, rejection, and non-commitment regions. Up to now, researchers have accumulated a vast literature on its theory and applications. For instance, to widen the applicability of the three-way decisions model in uncertain environments, Zhai et al. extended the three-way decisions models to tolerance rough fuzzy sets and rough fuzzy sets, respectively, in which the target concepts are correspondingly generalized [32,33]. To accommodate the situation where the objects or attributes in a multi-scale decision table are sequentially updated, Hao et al. used sequential three-way decisions to investigate the optimal scale selection problem [34]. Subsequently, Luo et al. applied three-way decisions theory to incomplete multi-scale information systems [35]. With respect to multiple attribute decision-making, Zhang et al. studied the inclusion relations of neutrosophic sets in reference [36]. To improve the classification accuracy of three-way decisions, Zhang et al. proposed a novel three-way decisions model with DTRS by considering new risk measurement functions through utility theory [37]. Yang et al. combined three-way decisions theory with IFS to obtain novel three-way decision rules [38]. At the same time, Liu et al. explored intuitionistic fuzzy three-way decision theory based on intuitionistic fuzzy decision systems [39]. Nevertheless, Yang et al. [38] and Liu et al. [39] only considered the case of a single granularity and did not analyze the decision-making situation of multiple granularities in an intuitionistic fuzzy environment. The DTRS and three-way decisions theory are both used to deal with decision-making problems, so it is also enlightening to study three-way decisions theory through DTRS. Liang et al. introduced an extended version applicable to multi-period scenarios using intuitionistic fuzzy decision-theoretic rough sets [40]. Furthermore, they introduced the intuitionistic fuzzy point operator into DTRS [41]. Three-way decisions have also been applied to multiple attribute group decision-making [42], the supplier selection problem [43], clustering analysis [44], cognitive computing [45], and so on. However, these works have not applied three-way decisions theory to the optimal granularity selection problem. To solve this problem, we expand the three-way decisions models.
The main contributions of this paper include four points:
(1) New methods for calculating the granularity importance degree among multiple granularities, namely $sig^{in,\ast}_{\Delta}(A_i, A', D)$ and $sig^{out,\ast}_{\Delta}(A_i, A', D)$, are given, which can generate more discriminative granularities.
(2) The optimistic optimistic multi-granulation rough intuitionistic fuzzy sets (OOMRIFS) model, the optimistic pessimistic multi-granulation rough intuitionistic fuzzy sets (OIMRIFS) model, the pessimistic optimistic multi-granulation rough intuitionistic fuzzy sets (IOMRIFS) model, and the pessimistic pessimistic multi-granulation rough intuitionistic fuzzy sets (IIMRIFS) model are constructed by combining intuitionistic fuzzy sets with the reductions of the optimistic and pessimistic multi-granulation rough sets. These four models can reduce the subjective errors caused by a single intuitionistic fuzzy set.
(3) We put forward four kinds of three-way decisions models based on the proposed four multi-granulation rough intuitionistic fuzzy sets (MRIFS), which can further reduce the redundant objects in each granularity of reduction sets.
(4) Comprehensive score function and comprehensive accuracy function based on MRIFS are constructed. Based on this, we can obtain the optimal granularity selection results.
The rest of this paper is organized as follows. In Section 2, some basic concepts of MRS, IFS, and three-way decisions are briefly reviewed. In Section 3, we propose two new methods for calculating the granularity importance degree and a granularity reduction algorithm (Algorithm 1), together with a comparative example. Four novel MRIFS models are constructed in Section 4, and their properties are verified by Example 2. Section 5 proposes novel three-way decisions models based on the above four new MRIFS and builds the comprehensive score function and comprehensive accuracy function based on MRIFS; through Algorithm 2, we then carry out the optimal granularity selection. In Section 6, we use Example 3 to illustrate the three-way decisions models based on the new MRIFS. Section 7 concludes the paper.

2. Preliminaries

The basic notions of MRS, IFS, and three-way decisions theory are briefly reviewed in this section. Throughout the paper, U denotes a nonempty object set, i.e., the universe of discourse, and $A = \{A_1, A_2, \ldots, A_m\}$ is an attribute set.
Definition 1
([9]). Suppose $IS = \langle U, A, V, f \rangle$ is a consistent information system, where $A = \{A_1, A_2, \ldots, A_m\}$ is an attribute set, $R_{A_i}$ is the equivalence relation generated by $A_i$, and $[x]_{A_i}$ is the equivalence class of $R_{A_i}$. For $X \subseteq U$, the lower and upper approximations of the optimistic multi-granulation rough sets (OMRS) of X are defined by the following two formulas:
$$\underline{\sum_{i=1}^{m} A_i}^{O}(X) = \left\{ x \in U \mid [x]_{A_1} \subseteq X \vee [x]_{A_2} \subseteq X \vee [x]_{A_3} \subseteq X \vee \cdots \vee [x]_{A_m} \subseteq X \right\}; \qquad \overline{\sum_{i=1}^{m} A_i}^{O}(X) = \sim\left( \underline{\sum_{i=1}^{m} A_i}^{O}(\sim X) \right),$$
where $\vee$ is the disjunction operation and $\sim X$ is the complement of X. If $\underline{\sum_{i=1}^{m} A_i}^{O}(X) \neq \overline{\sum_{i=1}^{m} A_i}^{O}(X)$, the pair $\left( \underline{\sum_{i=1}^{m} A_i}^{O}(X),\ \overline{\sum_{i=1}^{m} A_i}^{O}(X) \right)$ is referred to as an optimistic multi-granulation rough set of X.
Definition 2
([9]). Let $IS = \langle U, A, V, f \rangle$ be an information system, where $A = \{A_1, A_2, \ldots, A_m\}$ is an attribute set, $R_{A_i}$ is the equivalence relation generated by $A_i$, and $[x]_{A_i}$ $(1 \le i \le m)$ is the equivalence class of x under $R_{A_i}$. For $X \subseteq U$, the pessimistic multi-granulation rough sets (IMRS) of X with respect to A are defined as follows:
$$\underline{\sum_{i=1}^{m} A_i}^{I}(X) = \left\{ x \in U \mid [x]_{A_1} \subseteq X \wedge [x]_{A_2} \subseteq X \wedge [x]_{A_3} \subseteq X \wedge \cdots \wedge [x]_{A_m} \subseteq X \right\}; \qquad \overline{\sum_{i=1}^{m} A_i}^{I}(X) = \sim\left( \underline{\sum_{i=1}^{m} A_i}^{I}(\sim X) \right),$$
where $\wedge$ is the conjunction operation. If $\underline{\sum_{i=1}^{m} A_i}^{I}(X) \neq \overline{\sum_{i=1}^{m} A_i}^{I}(X)$, the pair $\left( \underline{\sum_{i=1}^{m} A_i}^{I}(X),\ \overline{\sum_{i=1}^{m} A_i}^{I}(X) \right)$ is referred to as a pessimistic multi-granulation rough set of X.
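To make the two definitions concrete, the following Python sketch computes the optimistic and pessimistic multi-granulation lower approximations from a list of partitions (one per granularity); the upper approximations follow from the duality above. The representation and function names are ours, chosen only for illustration.

def equivalence_class(partition, x):
    # return the block of the partition that contains x
    for block in partition:
        if x in block:
            return block
    raise ValueError(f"{x!r} not covered by the partition")

def optimistic_lower(universe, partitions, target):
    # x is accepted if [x]_{A_i} is a subset of X for AT LEAST ONE granularity (disjunction)
    return {x for x in universe
            if any(equivalence_class(p, x) <= target for p in partitions)}

def pessimistic_lower(universe, partitions, target):
    # x is accepted if [x]_{A_i} is a subset of X for EVERY granularity (conjunction)
    return {x for x in universe
            if all(equivalence_class(p, x) <= target for p in partitions)}

def optimistic_upper(universe, partitions, target):
    # duality: upper(X) = complement of lower(complement of X)
    return universe - optimistic_lower(universe, partitions, universe - target)

# Toy check: two granularities over U = {1, ..., 5} and X = {1, 2}.
U = {1, 2, 3, 4, 5}
A1 = [{1, 2}, {3}, {4, 5}]
A2 = [{1}, {2, 3}, {4, 5}]
X = {1, 2}
print(optimistic_lower(U, [A1, A2], X))   # {1, 2}
print(pessimistic_lower(U, [A1, A2], X))  # {1}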
Definition 3
([18,19]). Let U be a finite non-empty universe. An IFS E in U is denoted by:
$$E = \{\langle x, \mu_E(x), \nu_E(x)\rangle \mid x \in U\},$$
where $\mu_E(x): U \to [0, 1]$ and $\nu_E(x): U \to [0, 1]$ are called the membership and non-membership functions of the element x in E, with $0 \le \mu_E(x) + \nu_E(x) \le 1$. For $x \in U$, the hesitancy degree function is defined as $\pi_E(x) = 1 - \mu_E(x) - \nu_E(x)$; obviously, $\pi_E(x): U \to [0, 1]$. Suppose $E_1, E_2 \in IFS(U)$; the basic operations on $E_1$ and $E_2$ are given as follows:
(1) $E_1 \subseteq E_2 \Leftrightarrow \mu_{E_1}(x) \le \mu_{E_2}(x),\ \nu_{E_1}(x) \ge \nu_{E_2}(x),\ \forall x \in U$;
(2) $E_1 = E_2 \Leftrightarrow \mu_{E_1}(x) = \mu_{E_2}(x),\ \nu_{E_1}(x) = \nu_{E_2}(x),\ \forall x \in U$;
(3) $E_1 \cup E_2 = \{\langle x, \max\{\mu_{E_1}(x), \mu_{E_2}(x)\}, \min\{\nu_{E_1}(x), \nu_{E_2}(x)\}\rangle \mid x \in U\}$;
(4) $E_1 \cap E_2 = \{\langle x, \min\{\mu_{E_1}(x), \mu_{E_2}(x)\}, \max\{\nu_{E_1}(x), \nu_{E_2}(x)\}\rangle \mid x \in U\}$;
(5) $\sim E_1 = \{\langle x, \nu_{E_1}(x), \mu_{E_1}(x)\rangle \mid x \in U\}$.
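As a quick illustration of these operations, the sketch below encodes an IFS as a dictionary mapping each object to a $(\mu, \nu)$ pair; the helper names are ours, not the paper's.

def ifs_subset(e1, e2):
    # (1): E1 is contained in E2 iff mu1(x) <= mu2(x) and nu1(x) >= nu2(x) for every x
    return all(e1[x][0] <= e2[x][0] and e1[x][1] >= e2[x][1] for x in e1)

def ifs_union(e1, e2):
    # (3): pointwise (max mu, min nu)
    return {x: (max(e1[x][0], e2[x][0]), min(e1[x][1], e2[x][1])) for x in e1}

def ifs_intersection(e1, e2):
    # (4): pointwise (min mu, max nu)
    return {x: (min(e1[x][0], e2[x][0]), max(e1[x][1], e2[x][1])) for x in e1}

def ifs_complement(e):
    # (5): swap membership and non-membership
    return {x: (nu, mu) for x, (mu, nu) in e.items()}

def hesitancy(e, x):
    # pi(x) = 1 - mu(x) - nu(x)
    mu, nu = e[x]
    return 1.0 - mu - nu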
Definition 4
([30,31]). Let $U = \{x_1, x_2, \ldots, x_n\}$ be a universe of discourse, and let $\xi = \{\omega_P, \omega_N, \omega_B\}$ represent the decisions of assigning an object x to the positive region $POS(X)$, the negative region $NEG(X)$, and the boundary region $BND(X)$, respectively. The cost functions $\lambda_{PP}$, $\lambda_{NP}$ and $\lambda_{BP}$ represent the costs of the three decisions when $x \in X$, and the cost functions $\lambda_{PN}$, $\lambda_{NN}$ and $\lambda_{BN}$ represent the costs of the three decisions when $x \notin X$, as shown in Table 1.
According to the minimum-risk principle of the Bayesian decision procedure, the three-way decisions rules can be obtained as follows:
(P): If $P(X|[x]) \ge \alpha$, then $x \in POS(X)$;
(N): If $P(X|[x]) \le \beta$, then $x \in NEG(X)$;
(B): If $\beta < P(X|[x]) < \alpha$, then $x \in BND(X)$.
Here $\alpha$, $\beta$ and $\gamma$ are given respectively by:
$$\alpha = \frac{\lambda_{PN} - \lambda_{BN}}{(\lambda_{PN} - \lambda_{BN}) + (\lambda_{BP} - \lambda_{PP})};$$
$$\beta = \frac{\lambda_{BN} - \lambda_{NN}}{(\lambda_{BN} - \lambda_{NN}) + (\lambda_{NP} - \lambda_{BP})};$$
$$\gamma = \frac{\lambda_{PN} - \lambda_{NN}}{(\lambda_{PN} - \lambda_{NN}) + (\lambda_{NP} - \lambda_{PP})}.$$
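The thresholds and rules above are easy to mechanize. In the following sketch the loss values are chosen so as to reproduce the $\alpha = 0.75$ and $\beta = 0.33$ used later in Example 3; everything else (names, layout) is our own illustration.

def thresholds(lpp, lbp, lnp, lpn, lbn, lnn):
    # alpha, beta, gamma of Definition 4; assumes lpp <= lbp < lnp and lnn <= lbn < lpn
    alpha = (lpn - lbn) / ((lpn - lbn) + (lbp - lpp))
    beta = (lbn - lnn) / ((lbn - lnn) + (lnp - lbp))
    gamma = (lpn - lnn) / ((lpn - lnn) + (lnp - lpp))
    return alpha, beta, gamma

def three_way_decide(pr, alpha, beta):
    # pr is the conditional probability P(X | [x])
    if pr >= alpha:
        return "POS"
    if pr <= beta:
        return "NEG"
    return "BND"

alpha, beta, _ = thresholds(lpp=0, lbp=2, lnp=6, lpn=8, lbn=2, lnn=0)
print(round(alpha, 2), round(beta, 2))     # 0.75 0.33
print(three_way_decide(0.5, alpha, beta))  # BND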

3. Granularity Reduction Algorithm Derived from Granularity Importance Degree

Definition 5
([10,12]). Let $DIS = (U, C \cup D, V, f)$ be a decision information system, where $A = \{A_1, A_2, \ldots, A_m\}$ are m sub-attributes of the condition attributes C, and $U/D = \{X_1, X_2, \ldots, X_s\}$ is the partition induced by the decision attributes D. The approximation quality of U/D with respect to the granularity set A is defined as:
$$\gamma(A, D) = \frac{\left| \bigcup \left\{ \underline{\sum_{i=1}^{m} A_i}^{\Delta}(X_t) \ \middle|\ 1 \le t \le s \right\} \right|}{|U|},$$
where |X| denotes the cardinality of the set X, and $\Delta \in \{O, I\}$ distinguishes the optimistic and pessimistic multi-granulation rough set cases (the same below).
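A minimal sketch of this approximation quality, reusing the lower-approximation helpers from the sketch after Definition 2; passing `optimistic_lower` or `pessimistic_lower` selects $\Delta = O$ or $\Delta = I$.

def approximation_quality(universe, partitions, decision_partition, lower):
    # gamma(A, D) = |union over the decision classes X_t of lower(X_t)| / |U|
    covered = set()
    for dec_class in decision_partition:
        covered |= lower(universe, partitions, dec_class)
    return len(covered) / len(universe)

# e.g. approximation_quality(U, [A1, A2], [X, U - X], optimistic_lower)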
Definition 6
([12]). Let $DIS = (U, C \cup D, V, f)$ be a decision information system, $A = \{A_1, A_2, \ldots, A_m\}$ are m sub-attributes of C, $A' \subseteq A$, and $X \in U/D$.
(1) If $\underline{\sum_{i=1, A_i \in A}^{m} A_i}^{\Delta}(X) \neq \underline{\sum_{i=1, A_i \in A - A'}^{m} A_i}^{\Delta}(X)$, then $A'$ is important in A for X;
(2) If $\underline{\sum_{i=1, A_i \in A}^{m} A_i}^{\Delta}(X) = \underline{\sum_{i=1, A_i \in A - A'}^{m} A_i}^{\Delta}(X)$, then $A'$ is not important in A for X.
Definition 7
([10,12]). Suppose $DIS = (U, C \cup D, V, f)$ is a decision information system, $A = \{A_1, A_2, \ldots, A_m\}$ are m sub-attributes of C, and $A' \subseteq A$. For $\forall A_i \in A'$, the internal importance degree of $A_i$ for D on the granularity set $A'$ can be defined as follows:
$$sig^{in}_{\Delta}(A_i, A', D) = \left| \gamma(A', D) - \gamma(A' - \{A_i\}, D) \right|.$$
Definition 8
([10,12]). Let $DIS = (U, C \cup D, V, f)$ be a decision information system, $A = \{A_1, A_2, \ldots, A_m\}$ are m sub-attributes of C, and $A' \subseteq A$. For $\forall A_i \in A - A'$, the external importance degree of $A_i$ for D on the granularity set $A'$ can be defined as follows:
$$sig^{out}_{\Delta}(A_i, A', D) = \left| \gamma(\{A_i\} \cup A', D) - \gamma(A', D) \right|.$$
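On top of the approximation quality, Definitions 7 and 8 become short functions. In this sketch (our representation), `grans` maps granularity names to partitions and `names` lists the current subset $A'$:

def gamma_of(universe, grans, names, decision, lower):
    return approximation_quality(universe, [grans[n] for n in names],
                                 decision, lower)

def sig_in(universe, grans, names, a_i, decision, lower):
    # Definition 7: |gamma(A', D) - gamma(A' - {A_i}, D)|, assuming |A'| >= 2
    rest = [n for n in names if n != a_i]
    return abs(gamma_of(universe, grans, names, decision, lower)
               - gamma_of(universe, grans, rest, decision, lower))

def sig_out(universe, grans, names, a_i, decision, lower):
    # Definition 8: |gamma({A_i} union A', D) - gamma(A', D)| for A_i outside A'
    return abs(gamma_of(universe, grans, names + [a_i], decision, lower)
               - gamma_of(universe, grans, names, decision, lower))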
Theorem 1.
Let $DIS = (U, C \cup D, V, f)$ be a decision information system, $A = \{A_1, A_2, \ldots, A_m\}$ are m sub-attributes of C, and $A' \subseteq A$.
(1) For $\forall A_i \in A'$, on the basis of the attribute subset family $A'$, the granularity importance degree of $A_i$ in $A'$ with respect to D can be expressed as:
$$sig^{in}_{\Delta}(A_i, A', D) = \frac{1}{m-1} \sum_{k=1, k \neq i}^{m} \left| sig^{in}_{\Delta}(\{A_k, A_i\}, A', D) - sig^{in}_{\Delta}(A_k, A' - \{A_i\}, D) \right|,$$
where $1 \le k \le m$, $k \neq i$ (the same below).
(2) For $\forall A_i \in A - A'$, on the basis of the attribute subset family $A'$, the granularity importance degree of $A_i$ in $A - A'$ with respect to D satisfies:
$$sig^{out}_{\Delta}(A_i, A', D) = \frac{1}{m-1} \sum_{k=1, k \neq i}^{m} \left| sig^{out}_{\Delta}(\{A_k, A_i\}, \{A_i\} \cup A', D) - sig^{out}_{\Delta}(A_k, A', D) \right|.$$
Proof. 
(1) According to Definition 7, we have
$$sig^{in}_{\Delta}(A_i, A', D) = \left| \gamma(A', D) - \gamma(A' - \{A_i\}, D) \right| = \frac{1}{m-1} \sum_{k=1, k \neq i}^{m} \left( \left| \gamma(A', D) - \gamma(A' - \{A_i\}, D) \right| + \left| \gamma(A' - \{A_k, A_i\}, D) - \gamma(A' - \{A_k, A_i\}, D) \right| \right)$$
$$= \frac{1}{m-1} \sum_{k=1, k \neq i}^{m} \left| \left( \gamma(A', D) - \gamma(A' - \{A_k, A_i\}, D) \right) - \left( \gamma(A' - \{A_i\}, D) - \gamma(A' - \{A_k, A_i\}, D) \right) \right| = \frac{1}{m-1} \sum_{k=1, k \neq i}^{m} \left| sig^{in}_{\Delta}(\{A_k, A_i\}, A', D) - sig^{in}_{\Delta}(A_k, A' - \{A_i\}, D) \right|.$$
(2) According to Definition 8, we can get:
$$sig^{out}_{\Delta}(A_i, A', D) = \left| \gamma(\{A_i\} \cup A', D) - \gamma(A', D) \right| = \frac{1}{m-1} \sum_{k=1, k \neq i}^{m} \left( \left| \gamma(\{A_i\} \cup A', D) - \gamma(A', D) \right| + \left| \gamma(\{A_k\} \cup A', D) - \gamma(\{A_k\} \cup A', D) \right| \right)$$
$$= \frac{1}{m-1} \sum_{k=1, k \neq i}^{m} \left| \left( \gamma(\{A_k, A_i\} \cup A', D) - \gamma(\{A_i\} \cup A', D) \right) - \left( \gamma(\{A_k\} \cup A', D) - \gamma(A', D) \right) \right| = \frac{1}{m-1} \sum_{k=1, k \neq i}^{m} \left| sig^{out}_{\Delta}(\{A_k, A_i\}, \{A_i\} \cup A', D) - sig^{out}_{\Delta}(A_k, A', D) \right|.$$
 □
In Definitions 7 and 8, only the direct effect of a single granularity on the whole granularity set is considered; the indirect effects of the remaining granularities on decision-making are ignored. Definitions 9 and 10 below analyze the interdependence among multiple granularities and present two new methods for calculating the granularity importance degree.
Definition 9.
Let $DIS = (U, C \cup D, V, f)$ be a decision information system, $A = \{A_1, A_2, \ldots, A_m\}$ are m sub-attributes of C, and $A' \subseteq A$. For $\forall A_i, A_k \in A'$, on the attribute subset family $A'$, the new internal importance degree of $A_i$ relative to D is defined as follows:
$$sig^{in,\ast}_{\Delta}(A_i, A', D) = sig^{in}_{\Delta}(A_i, A', D) + \frac{1}{m-1} \sum_{k=1, k \neq i}^{m} \left| sig^{in}_{\Delta}(A_k, A' - \{A_i\}, D) - sig^{in}_{\Delta}(A_k, A', D) \right|.$$
Here $sig^{in}_{\Delta}(A_i, A', D)$ and $\frac{1}{m-1} \sum_{k \neq i} \left| sig^{in}_{\Delta}(A_k, A' - \{A_i\}, D) - sig^{in}_{\Delta}(A_k, A', D) \right|$ indicate, respectively, the direct and indirect effects of granularity $A_i$ on decision-making. When $\left| sig^{in}_{\Delta}(A_k, A' - \{A_i\}, D) - sig^{in}_{\Delta}(A_k, A', D) \right| > 0$, the granularity importance degree of $A_k$ is changed by the addition of $A_i$ to the attribute subset $A' - \{A_i\}$, so this change should be credited to $A_i$. Therefore, when there are m sub-attributes, we add $\frac{1}{m-1} \sum_{k \neq i} \left| sig^{in}_{\Delta}(A_k, A' - \{A_i\}, D) - sig^{in}_{\Delta}(A_k, A', D) \right|$ to the granularity importance degree of $A_i$.
If $\left| sig^{in}_{\Delta}(A_k, A' - \{A_i\}, D) - sig^{in}_{\Delta}(A_k, A', D) \right| = 0$ for every $k \neq i$, then there is no interaction between granularity $A_i$ and the other granularities, which means $sig^{in,\ast}_{\Delta}(A_i, A', D) = sig^{in}_{\Delta}(A_i, A', D)$.
Definition 10.
Let $DIS = (U, C \cup D, V, f)$ be a decision information system, $A = \{A_1, A_2, \ldots, A_m\}$ be m sub-attributes of C, and $A' \subseteq A$. For $\forall A_i \in A - A'$, the new external importance degree of $A_i$ relative to D is defined as follows:
$$sig^{out,\ast}_{\Delta}(A_i, A', D) = sig^{out}_{\Delta}(A_i, A', D) + \frac{1}{m-1} \sum_{k=1, k \neq i}^{m} \left| sig^{out}_{\Delta}(A_k, A', D) - sig^{out}_{\Delta}(A_k, \{A_i\} \cup A', D) \right|.$$
The new external importance degree admits an analogous interpretation.
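Continuing the sketch, the new internal importance degree of Definition 9 adds the averaged indirect effect to `sig_in`; the external analogue of Definition 10 follows the same pattern. Again, the names and data layout are ours.

def sig_in_new(universe, grans, names, a_i, decision, lower):
    # Definition 9: direct effect plus the average shift that removing A_i
    # causes in the internal importance of every other granularity A_k.
    # Assumes the family contains at least two granularities.
    m = len(names)
    without_i = [n for n in names if n != a_i]
    indirect = sum(
        abs(sig_in(universe, grans, without_i, a_k, decision, lower)
            - sig_in(universe, grans, names, a_k, decision, lower))
        for a_k in without_i)
    return (sig_in(universe, grans, names, a_i, decision, lower)
            + indirect / (m - 1))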
Theorem 2.
Let $DIS = (U, C \cup D, V, f)$ be a decision information system, $A = \{A_1, A_2, \ldots, A_m\}$ be m sub-attributes of C, $A' \subseteq A$, and $A_i \in A'$. The improved internal importance degree can be rewritten as:
$$sig^{in,\ast}_{\Delta}(A_i, A', D) = \frac{1}{m-1} \sum_{k=1, k \neq i}^{m} sig^{in}_{\Delta}(A_i, A' - \{A_k\}, D).$$
Proof. 
$$sig^{in,\ast}_{\Delta}(A_i, A', D) = sig^{in}_{\Delta}(A_i, A', D) + \frac{1}{m-1} \sum_{k=1, k \neq i}^{m} \left| sig^{in}_{\Delta}(A_k, A' - \{A_i\}, D) - sig^{in}_{\Delta}(A_k, A', D) \right|$$
$$= \frac{m-1}{m-1} \left| \gamma(A', D) - \gamma(A' - \{A_i\}, D) \right| + \frac{1}{m-1} \sum_{k=1, k \neq i}^{m} \left| \left| \gamma(A' - \{A_i\}, D) - \gamma(A' - \{A_k, A_i\}, D) \right| - \left| \gamma(A', D) - \gamma(A' - \{A_k\}, D) \right| \right|$$
$$= \frac{1}{m-1} \sum_{k=1, k \neq i}^{m} \left| \gamma(A' - \{A_k\}, D) - \gamma(A' - \{A_k, A_i\}, D) \right| = \frac{1}{m-1} \sum_{k=1, k \neq i}^{m} sig^{in}_{\Delta}(A_i, A' - \{A_k\}, D).$$
 □
Theorem 3.
Let $DIS = (U, C \cup D, V, f)$ be a decision information system, $A = \{A_1, A_2, \ldots, A_m\}$ are m sub-attributes of C, $A' \subseteq A$, and $A_i \in A - A'$. The improved external importance degree can be expressed as follows:
$$sig^{out,\ast}_{\Delta}(A_i, A', D) = \frac{1}{m-1} \sum_{k=1, k \neq i}^{m} sig^{out}_{\Delta}(A_i, \{A_k\} \cup A', D).$$
Proof. 
$$sig^{out,\ast}_{\Delta}(A_i, A', D) = sig^{out}_{\Delta}(A_i, A', D) + \frac{1}{m-1} \sum_{k=1, k \neq i}^{m} \left| sig^{out}_{\Delta}(A_k, A', D) - sig^{out}_{\Delta}(A_k, \{A_i\} \cup A', D) \right|$$
$$= \frac{m-1}{m-1} \left| \gamma(\{A_i\} \cup A', D) - \gamma(A', D) \right| + \frac{1}{m-1} \sum_{k=1, k \neq i}^{m} \left| \left| \gamma(\{A_k\} \cup A', D) - \gamma(A', D) \right| - \left| \gamma(\{A_k, A_i\} \cup A', D) - \gamma(\{A_i\} \cup A', D) \right| \right|$$
$$= \frac{1}{m-1} \sum_{k=1, k \neq i}^{m} \left| \gamma(\{A_i, A_k\} \cup A', D) - \gamma(\{A_k\} \cup A', D) \right| = \frac{1}{m-1} \sum_{k=1, k \neq i}^{m} sig^{out}_{\Delta}(A_i, \{A_k\} \cup A', D).$$
 □
Theorems 2 and 3 show that when $sig^{in}_{\Delta}(A_i, A' - \{A_k\}, D) = 0$ (respectively, $sig^{out}_{\Delta}(A_i, \{A_k\} \cup A', D) = 0$) holds for every $k \neq i$, we have $sig^{in,\ast}_{\Delta}(A_i, A', D) = 0$ (respectively, $sig^{out,\ast}_{\Delta}(A_i, A', D) = 0$). Moreover, each granularity importance degree is calculated on the basis of removing $A_k$ from $A'$, which makes it more convenient to choose the required granularities.
According to [10,12], we can obtain the optimistic and pessimistic multi-granulation lower approximations $L^O$ and $L^I$. The granularity reduction algorithm based on the improved granularity importance degree is derived from Theorems 2 and 3, as shown in Algorithm 1.
Algorithm 1. Granularity reduction algorithm derived from granularity importance degree
Input: $DIS = (U, C \cup D, V, f)$; $A = \{A_1, A_2, \ldots, A_m\}$, m sub-attributes of C, $A' \subseteq A$, $A_i \in A'$; $U/D = \{X_1, X_2, \ldots, X_s\}$;
Output: A granularity reduction set $A_i^{\Delta}$ of this information system.
1: set $A_i^{\Delta} \leftarrow \emptyset$, $1 \le h \le m$;
2: compute U/D and the optimistic and pessimistic multi-granulation lower approximations $L^{\Delta}$;
3: for $A_i \in A$
4:  compute $sig^{in,\ast}_{\Delta}(A_i, A, D)$ via Definition 9;
5:  if $sig^{in,\ast}_{\Delta}(A_i, A, D) > 0$ then $A_i^{\Delta} = A_i^{\Delta} \cup \{A_i\}$;
6:  end
7:  for $A_i \in A - A_i^{\Delta}$
8:    if $\gamma(A_i^{\Delta}, D) \neq \gamma(A, D)$ then compute $sig^{out,\ast}_{\Delta}(A_i, A, D)$ via Definition 10;
9:    end
10:   if $sig^{out,\ast}_{\Delta}(A_h, A, D) = \max\{sig^{out,\ast}_{\Delta}(A_i, A, D)\}$ then $A_i^{\Delta} = A_i^{\Delta} \cup \{A_h\}$;
11:   end
12: end
13: for $A_i \in A_i^{\Delta}$
14:   if $\gamma(A_i^{\Delta} - \{A_i\}, D) = \gamma(A, D)$ then $A_i^{\Delta} = A_i^{\Delta} - \{A_i\}$;
15:   end
16: end
17: return the granularity reduction set $A_i^{\Delta}$;
18: end
Therefore, we can obtain two reductions by utilizing Algorithm 1.
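Read procedurally, Algorithm 1 keeps the granularities with positive new internal importance, restores the approximation quality by greedily adding outside granularities, and finally drops redundancies. A compact Python sketch building on the helpers above; this is our reading of the pseudocode, not the authors' code.

def granularity_reduction(universe, grans, decision, lower):
    names = list(grans)
    full_gamma = gamma_of(universe, grans, names, decision, lower)
    # Step 1: keep granularities with positive new internal importance.
    red = [n for n in names
           if sig_in_new(universe, grans, names, n, decision, lower) > 0]
    # Step 2: while the approximation quality is not preserved, add the
    # outside granularity with the largest external importance.
    # (Definition 10's sig_out_new refines this; plain sig_out is used for brevity.)
    while red and gamma_of(universe, grans, red, decision, lower) != full_gamma:
        outside = [n for n in names if n not in red]
        if not outside:
            break
        red.append(max(outside, key=lambda n: sig_out(
            universe, grans, red, n, decision, lower)))
    # Step 3: drop any granularity whose removal preserves the quality.
    for n in list(red):
        rest = [k for k in red if k != n]
        if rest and gamma_of(universe, grans, rest, decision, lower) == full_gamma:
            red.remove(n)
    return red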
Example 1.
We calculate the granularity importance degrees for the 10 on-line investment schemes given in Reference [12]. After comparing and analyzing the obtained granularity importance degrees, we can obtain the reduction results for the 5 evaluation sites through Algorithm 1; the detailed calculation steps are as follows.
According to [12], we have $A = \{A_1, A_2, A_3, A_4, A_5\}$, $A' \subseteq A$, and $U/D = \{\{x_1, x_2, x_4, x_6, x_8\}, \{x_3, x_5, x_7, x_9, x_{10}\}\}$.
(1)
Reduction set of OMRS
First of all, we can calculate the internal importance degree of OMRS by Theorem 2 as shown in Table 2.
Then, according to Algorithm 1, the initial granularity set is deduced to be $\{A_1, A_2, A_3\}$. Inspired by Definition 5, we obtain $\gamma^O(\{A_2, A_3\}, D) = \gamma^O(A, D) = 1$. So the reduction set of the OMRS is $A_i^O = \{A_2, A_3\}$.
As shown in Table 2, the new method of calculating the internal importance degree generates more discriminative granularities, which makes it more convenient to screen out the required granularities. In Reference [12], the approximation quality of granularity $A_2$ in the reduction set differs from that of the whole granularity set, so the external importance degree has to be calculated again. When calculating the internal and external importance degrees, References [10,12] only considered the direct influence of the single granularity on the granularity $A_2$, so the influence of the granularity $A_2$ on the overall decision-making cannot be fully reflected.
(2)
Reduction set of IMRS
Similarly, by using Theorem 2, we can get the internal importance degree of each site under IMRS, as shown in Table 3.
According to Algorithm 1, sites 2, 4, and 5, whose internal importance degrees are greater than 0, are added to the granularity reduction set as the initial granularity set, and then its approximation quality can be calculated as follows:
$$\gamma^I(\{A_2, A_4\}, D) = \gamma^I(\{A_4, A_5\}, D) = \gamma^I(A, D) = 0.2.$$
Namely, the reduction set of IMRS is $A_i^I = \{A_2, A_4\}$ or $A_i^I = \{A_4, A_5\}$, without calculating the external importance degree.
In this paper, when calculating the internal and external importance degrees of each granularity, the influence on decision-making of removing the other granularities is also considered. According to Theorem 2, after calculating the internal importance degrees of OMRS and IMRS, if the approximation quality of the reduction set is the same as that of the overall granularity set, it is not necessary to calculate the external importance degree again, which reduces the amount of computation.

4. Novel Multi-Granulation Rough Intuitionistic Fuzzy Sets Models

In Example 1, two reduction sets are obtained under IMRS, so a novel method is needed to obtain a more precise granularity selection from the reduction results.
In order to obtain the optimal site selection result, we combine the optimistic and pessimistic multi-granulation reduction sets produced by Algorithm 1 with IFS, respectively, and construct the following four new MRIFS models.
Definition 11
([22,25]). Suppose $IS = (U, A, V, f)$ is an information system, $A = \{A_1, A_2, \ldots, A_m\}$, and E is an IFS on U. Then the lower and upper approximations of the optimistic MRIFS of E are respectively defined by:
$$\underline{\sum_{i=1}^{m} R_{A_i}}^{O}(E) = \left\{ \left\langle x, \mu_{\underline{\sum_{i=1}^{m} R_{A_i}}^{O}(E)}(x), \nu_{\underline{\sum_{i=1}^{m} R_{A_i}}^{O}(E)}(x) \right\rangle \ \middle|\ x \in U \right\}; \qquad \overline{\sum_{i=1}^{m} R_{A_i}}^{O}(E) = \left\{ \left\langle x, \mu_{\overline{\sum_{i=1}^{m} R_{A_i}}^{O}(E)}(x), \nu_{\overline{\sum_{i=1}^{m} R_{A_i}}^{O}(E)}(x) \right\rangle \ \middle|\ x \in U \right\},$$
where
$$\mu_{\underline{\sum_{i=1}^{m} R_{A_i}}^{O}(E)}(x) = \bigvee_{i=1}^{m} \inf_{y \in [x]_{A_i}} \mu_E(y), \quad \nu_{\underline{\sum_{i=1}^{m} R_{A_i}}^{O}(E)}(x) = \bigwedge_{i=1}^{m} \sup_{y \in [x]_{A_i}} \nu_E(y);$$
$$\mu_{\overline{\sum_{i=1}^{m} R_{A_i}}^{O}(E)}(x) = \bigwedge_{i=1}^{m} \sup_{y \in [x]_{A_i}} \mu_E(y), \quad \nu_{\overline{\sum_{i=1}^{m} R_{A_i}}^{O}(E)}(x) = \bigvee_{i=1}^{m} \inf_{y \in [x]_{A_i}} \nu_E(y),$$
and $R_{A_i}$ is the equivalence relation generated by $A_i$, $[x]_{A_i}$ is the equivalence class of $R_{A_i}$, and $\vee$ ($\wedge$) denotes the disjunction (conjunction) operation.
Definition 12
([22,25]). Suppose $IS = \langle U, A, V, f \rangle$ is an information system, $A = \{A_1, A_2, \ldots, A_m\}$, and E is an IFS on U. Then the lower and upper approximations of the pessimistic MRIFS of E can be described as follows:
$$\underline{\sum_{i=1}^{m} R_{A_i}}^{I}(E) = \left\{ \left\langle x, \mu_{\underline{\sum_{i=1}^{m} R_{A_i}}^{I}(E)}(x), \nu_{\underline{\sum_{i=1}^{m} R_{A_i}}^{I}(E)}(x) \right\rangle \ \middle|\ x \in U \right\}; \qquad \overline{\sum_{i=1}^{m} R_{A_i}}^{I}(E) = \left\{ \left\langle x, \mu_{\overline{\sum_{i=1}^{m} R_{A_i}}^{I}(E)}(x), \nu_{\overline{\sum_{i=1}^{m} R_{A_i}}^{I}(E)}(x) \right\rangle \ \middle|\ x \in U \right\},$$
where
$$\mu_{\underline{\sum_{i=1}^{m} R_{A_i}}^{I}(E)}(x) = \bigwedge_{i=1}^{m} \inf_{y \in [x]_{A_i}} \mu_E(y), \quad \nu_{\underline{\sum_{i=1}^{m} R_{A_i}}^{I}(E)}(x) = \bigvee_{i=1}^{m} \sup_{y \in [x]_{A_i}} \nu_E(y);$$
$$\mu_{\overline{\sum_{i=1}^{m} R_{A_i}}^{I}(E)}(x) = \bigvee_{i=1}^{m} \sup_{y \in [x]_{A_i}} \mu_E(y), \quad \nu_{\overline{\sum_{i=1}^{m} R_{A_i}}^{I}(E)}(x) = \bigwedge_{i=1}^{m} \inf_{y \in [x]_{A_i}} \nu_E(y),$$
and $[x]_{A_i}$ is the equivalence class of x under the equivalence relation $R_{A_i}$.
Definition 13.
Suppose $IS = \langle U, A, V, f \rangle$ is an information system, $A_i^O = \{A_1, A_2, \ldots, A_r\} \subseteq A$, $A = \{A_1, A_2, \ldots, A_m\}$, where $A_i^O$ is the attribute reduction set under OMRS, $R_{A_i^O}$ is the equivalence relation of x with respect to $A_i^O$, and $[x]_{A_i^O}$ is the equivalence class of $R_{A_i^O}$. Let E be an IFS on U; E can then be characterized by a pair of lower and upper approximations:
$$\underline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E) = \left\{ \left\langle x, \mu_{\underline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E)}(x), \nu_{\underline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E)}(x) \right\rangle \ \middle|\ x \in U \right\}; \qquad \overline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E) = \left\{ \left\langle x, \mu_{\overline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E)}(x), \nu_{\overline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E)}(x) \right\rangle \ \middle|\ x \in U \right\},$$
where
$$\mu_{\underline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E)}(x) = \bigvee_{i=1}^{r} \inf_{y \in [x]_{A_i^O}} \mu_E(y), \quad \nu_{\underline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E)}(x) = \bigwedge_{i=1}^{r} \sup_{y \in [x]_{A_i^O}} \nu_E(y);$$
$$\mu_{\overline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E)}(x) = \bigwedge_{i=1}^{r} \sup_{y \in [x]_{A_i^O}} \mu_E(y), \quad \nu_{\overline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E)}(x) = \bigvee_{i=1}^{r} \inf_{y \in [x]_{A_i^O}} \nu_E(y).$$
If $\underline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E) \neq \overline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E)$, then E is called an OOMRIFS.
Definition 14.
Suppose $IS = \langle U, A, V, f \rangle$ is an information system and E is an IFS on U. $A_i^O = \{A_1, A_2, \ldots, A_r\} \subseteq A$, $A = \{A_1, A_2, \ldots, A_m\}$, where $A_i^O$ is the optimistic multi-granulation attribute reduction set. Then the lower and upper approximations of the pessimistic MRIFS under the optimistic multi-granulation environment can be defined as follows:
$$\underline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E) = \left\{ \left\langle x, \mu_{\underline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E)}(x), \nu_{\underline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E)}(x) \right\rangle \ \middle|\ x \in U \right\}; \qquad \overline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E) = \left\{ \left\langle x, \mu_{\overline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E)}(x), \nu_{\overline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E)}(x) \right\rangle \ \middle|\ x \in U \right\},$$
where
$$\mu_{\underline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E)}(x) = \bigwedge_{i=1}^{r} \inf_{y \in [x]_{A_i^O}} \mu_E(y), \quad \nu_{\underline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E)}(x) = \bigvee_{i=1}^{r} \sup_{y \in [x]_{A_i^O}} \nu_E(y);$$
$$\mu_{\overline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E)}(x) = \bigvee_{i=1}^{r} \sup_{y \in [x]_{A_i^O}} \mu_E(y), \quad \nu_{\overline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E)}(x) = \bigwedge_{i=1}^{r} \inf_{y \in [x]_{A_i^O}} \nu_E(y).$$
The pair $\left( \underline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E), \overline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E) \right)$ is called an OIMRIFS if $\underline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E) \neq \overline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E)$.
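The four MRIFS models differ only in (i) which reduction set supplies the partitions and (ii) whether $\vee$ (max) or $\wedge$ (min) combines the granularities. A sketch covering Definitions 13-16 at once, with the IFS stored as a dict of $(\mu, \nu)$ pairs as in the earlier sketches (the names are ours):

def mrifs_lower(universe, partitions, ifs, combine):
    # combine = max -> optimistic combination (Definitions 13 and 15);
    # combine = min -> pessimistic combination (Definitions 14 and 16).
    other = min if combine is max else max
    out = {}
    for x in universe:
        mus, nus = [], []
        for p in partitions:
            cls = equivalence_class(p, x)
            mus.append(min(ifs[y][0] for y in cls))  # inf of mu over [x]_{A_i}
            nus.append(max(ifs[y][1] for y in cls))  # sup of nu over [x]_{A_i}
        out[x] = (combine(mus), other(nus))
    return out

def mrifs_upper(universe, partitions, ifs, combine):
    other = min if combine is max else max
    out = {}
    for x in universe:
        mus, nus = [], []
        for p in partitions:
            cls = equivalence_class(p, x)
            mus.append(max(ifs[y][0] for y in cls))  # sup of mu
            nus.append(min(ifs[y][1] for y in cls))  # inf of nu
        out[x] = (other(mus), combine(nus))
    return out

# OOMRIFS lower: mrifs_lower(U, partitions_of_OMRS_reduct, E, max);
# OIMRIFS: pass min instead; IOMRIFS/IIMRIFS: use the IMRS reduction set.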
According to Definitions 13 and 14, the following theorem can be obtained.
Theorem 4.
Let $IS = \langle U, A, V, f \rangle$ be an information system, $A_i^O = \{A_1, A_2, \ldots, A_r\} \subseteq A$, $A = \{A_1, A_2, \ldots, A_m\}$, and let $E_1$, $E_2$ be IFS on U. From Definitions 13 and 14, the following properties are obtained:
(1) $\underline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(\sim E_1) = \sim \overline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E_1)$;
(2) $\overline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(\sim E_1) = \sim \underline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E_1)$;
(3) $\underline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(\sim E_1) = \sim \overline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E_1)$;
(4) $\overline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(\sim E_1) = \sim \underline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E_1)$;
(5) $\underline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E_1) \subseteq \underline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E_1)$;
(6) $\overline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E_1) \subseteq \overline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E_1)$;
(7) $\underline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E_1 \cap E_2) = \underline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E_1) \cap \underline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E_2)$, $\underline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E_1 \cap E_2) = \underline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E_1) \cap \underline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E_2)$;
(8) $\overline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E_1 \cup E_2) = \overline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E_1) \cup \overline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E_2)$, $\overline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E_1 \cup E_2) = \overline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E_1) \cup \overline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E_2)$;
(9) $\underline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E_1 \cup E_2) \supseteq \underline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E_1) \cup \underline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E_2)$, $\underline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E_1 \cup E_2) \supseteq \underline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E_1) \cup \underline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E_2)$;
(10) $\overline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E_1 \cap E_2) \subseteq \overline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E_1) \cap \overline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E_2)$, $\overline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E_1 \cap E_2) \subseteq \overline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E_1) \cap \overline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E_2)$.
Proof. 
It follows directly from Definitions 13 and 14. □
Definition 15.
Let $IS = \langle U, A, V, f \rangle$ be an information system, and let E be an IFS on U. $A_i^I = \{A_1, A_2, \ldots, A_r\} \subseteq A$, $A = \{A_1, A_2, \ldots, A_m\}$, where $A_i^I$ is the pessimistic multi-granulation attribute reduction set. Then the pessimistic optimistic lower and upper approximations of E with respect to the equivalence relations $R_{A_i^I}$ are defined by the following formulas:
$$\underline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E) = \left\{ \left\langle x, \mu_{\underline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E)}(x), \nu_{\underline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E)}(x) \right\rangle \ \middle|\ x \in U \right\}; \qquad \overline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E) = \left\{ \left\langle x, \mu_{\overline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E)}(x), \nu_{\overline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E)}(x) \right\rangle \ \middle|\ x \in U \right\},$$
where
$$\mu_{\underline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E)}(x) = \bigvee_{i=1}^{r} \inf_{y \in [x]_{A_i^I}} \mu_E(y), \quad \nu_{\underline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E)}(x) = \bigwedge_{i=1}^{r} \sup_{y \in [x]_{A_i^I}} \nu_E(y);$$
$$\mu_{\overline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E)}(x) = \bigwedge_{i=1}^{r} \sup_{y \in [x]_{A_i^I}} \mu_E(y), \quad \nu_{\overline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E)}(x) = \bigvee_{i=1}^{r} \inf_{y \in [x]_{A_i^I}} \nu_E(y).$$
If $\underline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E) \neq \overline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E)$, then E is called an IOMRIFS.
Definition 16.
Let $IS = \langle U, A, V, f \rangle$ be an information system, and let E be an IFS on U. $A_i^I = \{A_1, A_2, \ldots, A_r\} \subseteq A$, $A = \{A_1, A_2, \ldots, A_m\}$, where $A_i^I$ is the pessimistic multi-granulation attribute reduction set. Then the pessimistic lower and upper approximations of E under IMRS are defined by the following formulas:
$$\underline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E) = \left\{ \left\langle x, \mu_{\underline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E)}(x), \nu_{\underline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E)}(x) \right\rangle \ \middle|\ x \in U \right\}; \qquad \overline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E) = \left\{ \left\langle x, \mu_{\overline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E)}(x), \nu_{\overline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E)}(x) \right\rangle \ \middle|\ x \in U \right\},$$
where
$$\mu_{\underline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E)}(x) = \bigwedge_{i=1}^{r} \inf_{y \in [x]_{A_i^I}} \mu_E(y), \quad \nu_{\underline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E)}(x) = \bigvee_{i=1}^{r} \sup_{y \in [x]_{A_i^I}} \nu_E(y);$$
$$\mu_{\overline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E)}(x) = \bigvee_{i=1}^{r} \sup_{y \in [x]_{A_i^I}} \mu_E(y), \quad \nu_{\overline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E)}(x) = \bigwedge_{i=1}^{r} \inf_{y \in [x]_{A_i^I}} \nu_E(y),$$
where $R_{A_i^I}$ is the equivalence relation of x with respect to the attribute reduction set $A_i^I$ under IMRS and $[x]_{A_i^I}$ is the equivalence class of $R_{A_i^I}$.
If $\underline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E) \neq \overline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E)$, then the pair $\left( \underline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E), \overline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E) \right)$ is said to be an IIMRIFS.
According to Definitions 15 and 16, the following theorem can be captured.
Theorem 5.
Let $IS = \langle U, A, V, f \rangle$ be an information system, $A_i^I = \{A_1, A_2, \ldots, A_r\} \subseteq A$, $A = \{A_1, A_2, \ldots, A_m\}$, and let $E_1$, $E_2$ be IFS on U. Then the IOMRIFS and IIMRIFS models have the following properties:
(1) $\underline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(\sim E_1) = \sim \overline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E_1)$;
(2) $\overline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(\sim E_1) = \sim \underline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E_1)$;
(3) $\underline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(\sim E_1) = \sim \overline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E_1)$;
(4) $\overline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(\sim E_1) = \sim \underline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E_1)$;
(5) $\underline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E_1) \subseteq \underline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E_1)$;
(6) $\overline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E_1) \subseteq \overline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E_1)$;
(7) $\underline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E_1 \cap E_2) = \underline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E_1) \cap \underline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E_2)$, $\underline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E_1 \cap E_2) = \underline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E_1) \cap \underline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E_2)$;
(8) $\overline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E_1 \cup E_2) = \overline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E_1) \cup \overline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E_2)$, $\overline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E_1 \cup E_2) = \overline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E_1) \cup \overline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E_2)$;
(9) $\underline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E_1 \cup E_2) \supseteq \underline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E_1) \cup \underline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E_2)$, $\underline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E_1 \cup E_2) \supseteq \underline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E_1) \cup \underline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E_2)$;
(10) $\overline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E_1 \cap E_2) \subseteq \overline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E_1) \cap \overline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E_2)$, $\overline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E_1 \cap E_2) \subseteq \overline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E_1) \cap \overline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E_2)$.
Proof. 
It can be derived directly from Definitions 15 and 16. □
The characteristics of the proposed four models are further verified by Example 2 below.
Example 2.
(Continued from Example 1.) From Example 1, we know that the 5 sites are each evaluated on the 10 investment schemes. Suppose the schemes have the following IFS:
$$E = \left\{ \frac{[0.25, 0.43]}{x_1}, \frac{[0.51, 0.28]}{x_2}, \frac{[0.54, 0.38]}{x_3}, \frac{[0.37, 0.59]}{x_4}, \frac{[0.49, 0.35]}{x_5}, \frac{[0.92, 0.04]}{x_6}, \frac{[0.09, 0.86]}{x_7}, \frac{[0.15, 0.46]}{x_8}, \frac{[0.72, 0.12]}{x_9}, \frac{[0.67, 0.23]}{x_{10}} \right\}.$$
(1) The lower and upper approximations of OOMRIFS can be calculated as follows:
$$\underline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E) = \left\{ \frac{[0.25, 0.59]}{x_1}, \frac{[0.49, 0.38]}{x_2}, \frac{[0.49, 0.38]}{x_3}, \frac{[0.25, 0.59]}{x_4}, \frac{[0.49, 0.38]}{x_5}, \frac{[0.25, 0.46]}{x_6}, \frac{[0.09, 0.86]}{x_7}, \frac{[0.15, 0.46]}{x_8}, \frac{[0.15, 0.46]}{x_9}, \frac{[0.67, 0.23]}{x_{10}} \right\},$$
$$\overline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E) = \left\{ \frac{[0.51, 0.28]}{x_1}, \frac{[0.51, 0.28]}{x_2}, \frac{[0.54, 0.35]}{x_3}, \frac{[0.51, 0.28]}{x_4}, \frac{[0.54, 0.35]}{x_5}, \frac{[0.92, 0.04]}{x_6}, \frac{[0.54, 0.35]}{x_7}, \frac{[0.15, 0.46]}{x_8}, \frac{[0.72, 0.12]}{x_9}, \frac{[0.67, 0.23]}{x_{10}} \right\}.$$
(2) Similarly, in OIMRIFS, we have:
$$\underline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E) = \left\{ \frac{[0.25, 0.59]}{x_1}, \frac{[0.25, 0.59]}{x_2}, \frac{[0.09, 0.86]}{x_3}, \frac{[0.25, 0.59]}{x_4}, \frac{[0.09, 0.86]}{x_5}, \frac{[0.15, 0.59]}{x_6}, \frac{[0.09, 0.86]}{x_7}, \frac{[0.15, 0.46]}{x_8}, \frac{[0.09, 0.86]}{x_9}, \frac{[0.09, 0.86]}{x_{10}} \right\},$$
$$\overline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E) = \left\{ \frac{[0.92, 0.04]}{x_1}, \frac{[0.54, 0.28]}{x_2}, \frac{[0.54, 0.28]}{x_3}, \frac{[0.92, 0.04]}{x_4}, \frac{[0.54, 0.28]}{x_5}, \frac{[0.92, 0.04]}{x_6}, \frac{[0.72, 0.12]}{x_7}, \frac{[0.92, 0.04]}{x_8}, \frac{[0.92, 0.04]}{x_9}, \frac{[0.72, 0.12]}{x_{10}} \right\}.$$
From the above results, Figure 1 can be drawn as follows:
Note that
$\mu_1 = \mu_{\underline{OO}}(x_j)$ and $\nu_1 = \nu_{\underline{OO}}(x_j)$ represent the lower approximation of OOMRIFS;
$\mu_2 = \mu_{\overline{OO}}(x_j)$ and $\nu_2 = \nu_{\overline{OO}}(x_j)$ represent the upper approximation of OOMRIFS;
$\mu_3 = \mu_{\underline{OI}}(x_j)$ and $\nu_3 = \nu_{\underline{OI}}(x_j)$ represent the lower approximation of OIMRIFS;
$\mu_4 = \mu_{\overline{OI}}(x_j)$ and $\nu_4 = \nu_{\overline{OI}}(x_j)$ represent the upper approximation of OIMRIFS.
Regarding Figure 1, we can get:
$$\mu_{\underline{OI}}(x_j) \le \mu_{\underline{OO}}(x_j) \le \mu_{\overline{OO}}(x_j) \le \mu_{\overline{OI}}(x_j); \qquad \nu_{\underline{OI}}(x_j) \ge \nu_{\underline{OO}}(x_j) \ge \nu_{\overline{OO}}(x_j) \ge \nu_{\overline{OI}}(x_j).$$
As shown in Figure 1, the rules of Theorem 4 are satisfied. By constructing the OOMRIFS and OIMRIFS models, we can reduce the subjective scoring errors of experts under intuitionistic fuzzy conditions.
(3) Similar to (1), in IOMRIFS, we have:
$$\underline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E) = \left\{ \frac{[0.25, 0.43]}{x_1}, \frac{[0.25, 0.43]}{x_2}, \frac{[0.25, 0.43]}{x_3}, \frac{[0.37, 0.59]}{x_4}, \frac{[0.25, 0.43]}{x_5}, \frac{[0.25, 0.46]}{x_6}, \frac{[0.09, 0.86]}{x_7}, \frac{[0.15, 0.46]}{x_8}, \frac{[0.67, 0.23]}{x_9}, \frac{[0.67, 0.23]}{x_{10}} \right\},$$
$$\overline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E) = \left\{ \frac{[0.51, 0.28]}{x_1}, \frac{[0.51, 0.28]}{x_2}, \frac{[0.54, 0.35]}{x_3}, \frac{[0.37, 0.59]}{x_4}, \frac{[0.49, 0.35]}{x_5}, \frac{[0.92, 0.04]}{x_6}, \frac{[0.51, 0.35]}{x_7}, \frac{[0.49, 0.35]}{x_8}, \frac{[0.72, 0.12]}{x_9}, \frac{[0.67, 0.23]}{x_{10}} \right\}.$$
(4) In the same way as (1), in IIMRIFS, we can get:
$$\underline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E) = \left\{ \frac{[0.25, 0.59]}{x_1}, \frac{[0.09, 0.86]}{x_2}, \frac{[0.09, 0.86]}{x_3}, \frac{[0.25, 0.59]}{x_4}, \frac{[0.09, 0.86]}{x_5}, \frac{[0.09, 0.86]}{x_6}, \frac{[0.09, 0.86]}{x_7}, \frac{[0.09, 0.86]}{x_8}, \frac{[0.15, 0.46]}{x_9}, \frac{[0.67, 0.23]}{x_{10}} \right\},$$
$$\overline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E) = \left\{ \frac{[0.92, 0.04]}{x_1}, \frac{[0.54, 0.28]}{x_2}, \frac{[0.92, 0.04]}{x_3}, \frac{[0.92, 0.04]}{x_4}, \frac{[0.54, 0.28]}{x_5}, \frac{[0.92, 0.04]}{x_6}, \frac{[0.92, 0.04]}{x_7}, \frac{[0.92, 0.04]}{x_8}, \frac{[0.92, 0.04]}{x_9}, \frac{[0.72, 0.12]}{x_{10}} \right\}.$$
From (3) and (4), we can obtain Figure 2 as shown:
Note that
$\mu_5 = \mu_{\underline{IO}}(x_j)$ and $\nu_5 = \nu_{\underline{IO}}(x_j)$ represent the lower approximation of IOMRIFS;
$\mu_6 = \mu_{\overline{IO}}(x_j)$ and $\nu_6 = \nu_{\overline{IO}}(x_j)$ represent the upper approximation of IOMRIFS;
$\mu_7 = \mu_{\underline{II}}(x_j)$ and $\nu_7 = \nu_{\underline{II}}(x_j)$ represent the lower approximation of IIMRIFS;
$\mu_8 = \mu_{\overline{II}}(x_j)$ and $\nu_8 = \nu_{\overline{II}}(x_j)$ represent the upper approximation of IIMRIFS.
For Figure 2, we can get:
$$\mu_{\underline{II}}(x_j) \le \mu_{\underline{IO}}(x_j) \le \mu_{\overline{IO}}(x_j) \le \mu_{\overline{II}}(x_j); \qquad \nu_{\underline{II}}(x_j) \ge \nu_{\underline{IO}}(x_j) \ge \nu_{\overline{IO}}(x_j) \ge \nu_{\overline{II}}(x_j).$$
As shown in Figure 2, the rules of Theorem 5 are satisfied.
Through Example 2, we obtain four relatively more objective MRIFS models, which are beneficial for reducing subjective errors.

5. Three-Way Decisions Models Based on MRIFS and Optimal Granularity Selection

In order to obtain the optimal granularity selection results in the case of optimistic and pessimistic multi-granulation sets, it is necessary to further distinguish the importance degree of each granularity in the reduction sets. We respectively combine the four MRIFS models mentioned above with three-way decisions theory to get four new three-way decisions models. By extracting the rules, the redundant objects in the reduction sets are removed, and the decision error is further reduced. Then the optimal granularity selection results in two cases are obtained respectively by constructing the comprehensive score function and comprehensive accuracy function measurement formulas of each granularity of the reduction sets.

5.1. Three-Way Decisions Model Based on OOMRIFS

Suppose $A_i^O$ is the reduction set under OMRS. According to Reference [46], the expected loss functions $R^{OO}(\omega_{\diamond}|[x]_{A_i^O})$ $(\diamond = P, B, N)$ of an object x can be obtained:
$$R^{OO}(\omega_P|[x]_{A_i^O}) = \lambda_{PP}\,\mu^{OO}(x) + \lambda_{PN}\,\nu^{OO}(x) + \lambda_{PB}\,\pi^{OO}(x);$$
$$R^{OO}(\omega_N|[x]_{A_i^O}) = \lambda_{NP}\,\mu^{OO}(x) + \lambda_{NN}\,\nu^{OO}(x) + \lambda_{NB}\,\pi^{OO}(x);$$
$$R^{OO}(\omega_B|[x]_{A_i^O}) = \lambda_{BP}\,\mu^{OO}(x) + \lambda_{BN}\,\nu^{OO}(x) + \lambda_{BB}\,\pi^{OO}(x),$$
where, for the lower approximation,
$$\mu^{OO}(x) = \mu_{\underline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E)}(x) = \bigvee_{i=1}^{r} \inf_{y \in [x]_{A_i^O}} \mu_E(y), \quad \nu^{OO}(x) = \nu_{\underline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E)}(x) = \bigwedge_{i=1}^{r} \sup_{y \in [x]_{A_i^O}} \nu_E(y), \quad \pi^{OO}(x) = 1 - \mu^{OO}(x) - \nu^{OO}(x);$$
or, for the upper approximation,
$$\mu^{OO}(x) = \mu_{\overline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E)}(x) = \bigwedge_{i=1}^{r} \sup_{y \in [x]_{A_i^O}} \mu_E(y), \quad \nu^{OO}(x) = \nu_{\overline{\sum_{i=1}^{r} R_{A_i^O}}^{O}(E)}(x) = \bigvee_{i=1}^{r} \inf_{y \in [x]_{A_i^O}} \nu_E(y), \quad \pi^{OO}(x) = 1 - \mu^{OO}(x) - \nu^{OO}(x).$$
The minimum-risk decision rules derived from the Bayesian decision process are as follows:
$(P^{\prime})$: If $R(\omega_P|[x]_{A_i^O}) \le R(\omega_B|[x]_{A_i^O})$ and $R(\omega_P|[x]_{A_i^O}) \le R(\omega_N|[x]_{A_i^O})$, then $x \in POS(X)$;
$(N^{\prime})$: If $R(\omega_N|[x]_{A_i^O}) \le R(\omega_P|[x]_{A_i^O})$ and $R(\omega_N|[x]_{A_i^O}) \le R(\omega_B|[x]_{A_i^O})$, then $x \in NEG(X)$;
$(B^{\prime})$: If $R(\omega_B|[x]_{A_i^O}) \le R(\omega_N|[x]_{A_i^O})$ and $R(\omega_B|[x]_{A_i^O}) \le R(\omega_P|[x]_{A_i^O})$, then $x \in BND(X)$.
Thus, the decision rules $(P^{\prime})$-$(B^{\prime})$ can be re-expressed concisely as follows.
The $(P^{\prime})$ rule requires:
$$\left( \mu^{OO}(x) \ge (1 - \pi^{OO}(x))\,\frac{\lambda_{PN} - \lambda_{NN}}{(\lambda_{PN} - \lambda_{NN}) + (\lambda_{NP} - \lambda_{PP})} \right) \wedge \left( \mu^{OO}(x) \ge (1 - \pi^{OO}(x))\,\frac{\lambda_{PN} - \lambda_{BN}}{(\lambda_{PN} - \lambda_{BN}) + (\lambda_{BP} - \lambda_{PP})} \right);$$
The $(N^{\prime})$ rule requires:
$$\left( \mu^{OO}(x) < (1 - \pi^{OO}(x))\,\frac{\lambda_{PN} - \lambda_{NN}}{(\lambda_{PN} - \lambda_{NN}) + (\lambda_{NP} - \lambda_{PP})} \right) \wedge \left( \mu^{OO}(x) < (1 - \pi^{OO}(x))\,\frac{\lambda_{BN} - \lambda_{NN}}{(\lambda_{BN} - \lambda_{NN}) + (\lambda_{NP} - \lambda_{BP})} \right);$$
The $(B^{\prime})$ rule requires:
$$\left( \mu^{OO}(x) < (1 - \pi^{OO}(x))\,\frac{\lambda_{PN} - \lambda_{BN}}{(\lambda_{PN} - \lambda_{BN}) + (\lambda_{BP} - \lambda_{PP})} \right) \wedge \left( \mu^{OO}(x) \ge (1 - \pi^{OO}(x))\,\frac{\lambda_{BN} - \lambda_{NN}}{(\lambda_{BN} - \lambda_{NN}) + (\lambda_{NP} - \lambda_{BP})} \right).$$
Therefore, the three-way decisions rules based on OOMRIFS are as follows:
(P1): If $\mu^{OO}(x) \ge (1 - \pi^{OO}(x))\,\alpha$, then $x \in POS(X)$;
(N1): If $\mu^{OO}(x) \le (1 - \pi^{OO}(x))\,\beta$, then $x \in NEG(X)$;
(B1): If $(1 - \pi^{OO}(x))\,\beta < \mu^{OO}(x)$ and $\mu^{OO}(x) < (1 - \pi^{OO}(x))\,\alpha$, then $x \in BND(X)$.
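Rules (P1)-(B1) compare $\mu$ against thresholds scaled by $1 - \pi = \mu + \nu$. A small sketch that partitions the universe given one MRIFS approximation (a dict of $(\mu, \nu)$ pairs, as in the earlier sketches):

def three_way_regions(approx, alpha, beta):
    # approx maps x -> (mu, nu); note that 1 - pi = mu + nu
    pos, neg, bnd = set(), set(), set()
    for x, (mu, nu) in approx.items():
        scale = mu + nu
        if mu >= scale * alpha:
            pos.add(x)
        elif mu <= scale * beta:
            neg.add(x)
        else:
            bnd.add(x)
    return pos, neg, bnd

# e.g. three_way_regions(mrifs_lower(U, parts, E, max), alpha=0.75, beta=0.33)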

5.2. Three-Way Decisions Model Based on OIMRIFS

Suppose $A_i^O$ is the reduction set under OMRS. According to Reference [46], the expected loss functions $R^{OI}(\omega_{\diamond}|[x]_{A_i^O})$ $(\diamond = P, B, N)$ of an object x are presented as follows:
$$R^{OI}(\omega_P|[x]_{A_i^O}) = \lambda_{PP}\,\mu^{OI}(x) + \lambda_{PN}\,\nu^{OI}(x) + \lambda_{PB}\,\pi^{OI}(x);$$
$$R^{OI}(\omega_N|[x]_{A_i^O}) = \lambda_{NP}\,\mu^{OI}(x) + \lambda_{NN}\,\nu^{OI}(x) + \lambda_{NB}\,\pi^{OI}(x);$$
$$R^{OI}(\omega_B|[x]_{A_i^O}) = \lambda_{BP}\,\mu^{OI}(x) + \lambda_{BN}\,\nu^{OI}(x) + \lambda_{BB}\,\pi^{OI}(x),$$
where, for the lower approximation,
$$\mu^{OI}(x) = \mu_{\underline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E)}(x) = \bigwedge_{i=1}^{r} \inf_{y \in [x]_{A_i^O}} \mu_E(y), \quad \nu^{OI}(x) = \nu_{\underline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E)}(x) = \bigvee_{i=1}^{r} \sup_{y \in [x]_{A_i^O}} \nu_E(y), \quad \pi^{OI}(x) = 1 - \mu^{OI}(x) - \nu^{OI}(x);$$
or, for the upper approximation,
$$\mu^{OI}(x) = \mu_{\overline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E)}(x) = \bigvee_{i=1}^{r} \sup_{y \in [x]_{A_i^O}} \mu_E(y), \quad \nu^{OI}(x) = \nu_{\overline{\sum_{i=1}^{r} R_{A_i^O}}^{I}(E)}(x) = \bigwedge_{i=1}^{r} \inf_{y \in [x]_{A_i^O}} \nu_E(y), \quad \pi^{OI}(x) = 1 - \mu^{OI}(x) - \nu^{OI}(x).$$
Therefore, the three-way decisions rules based on OIMRIFS are as follows:
(P2): If $\mu^{OI}(x) \ge (1 - \pi^{OI}(x))\,\alpha$, then $x \in POS(X)$;
(N2): If $\mu^{OI}(x) \le (1 - \pi^{OI}(x))\,\beta$, then $x \in NEG(X)$;
(B2): If $(1 - \pi^{OI}(x))\,\beta < \mu^{OI}(x)$ and $\mu^{OI}(x) < (1 - \pi^{OI}(x))\,\alpha$, then $x \in BND(X)$.

5.3. Three-Way Decisions Model Based on IOMRIFS

Suppose $A_i^I$ is the reduction set under IMRS. According to Reference [46], the expected loss functions $R^{IO}(\omega_{\diamond}|[x]_{A_i^I})$ $(\diamond = P, B, N)$ of an object x are as follows:
$$R^{IO}(\omega_P|[x]_{A_i^I}) = \lambda_{PP}\,\mu^{IO}(x) + \lambda_{PN}\,\nu^{IO}(x) + \lambda_{PB}\,\pi^{IO}(x);$$
$$R^{IO}(\omega_N|[x]_{A_i^I}) = \lambda_{NP}\,\mu^{IO}(x) + \lambda_{NN}\,\nu^{IO}(x) + \lambda_{NB}\,\pi^{IO}(x);$$
$$R^{IO}(\omega_B|[x]_{A_i^I}) = \lambda_{BP}\,\mu^{IO}(x) + \lambda_{BN}\,\nu^{IO}(x) + \lambda_{BB}\,\pi^{IO}(x),$$
where, for the lower approximation,
$$\mu^{IO}(x) = \mu_{\underline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E)}(x) = \bigvee_{i=1}^{r} \inf_{y \in [x]_{A_i^I}} \mu_E(y), \quad \nu^{IO}(x) = \nu_{\underline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E)}(x) = \bigwedge_{i=1}^{r} \sup_{y \in [x]_{A_i^I}} \nu_E(y), \quad \pi^{IO}(x) = 1 - \mu^{IO}(x) - \nu^{IO}(x);$$
or, for the upper approximation,
$$\mu^{IO}(x) = \mu_{\overline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E)}(x) = \bigwedge_{i=1}^{r} \sup_{y \in [x]_{A_i^I}} \mu_E(y), \quad \nu^{IO}(x) = \nu_{\overline{\sum_{i=1}^{r} R_{A_i^I}}^{O}(E)}(x) = \bigvee_{i=1}^{r} \inf_{y \in [x]_{A_i^I}} \nu_E(y), \quad \pi^{IO}(x) = 1 - \mu^{IO}(x) - \nu^{IO}(x).$$
Therefore, the three-way decisions rules based on IOMRIFS are as follows:
(P3): If $\mu^{IO}(x) \ge (1 - \pi^{IO}(x))\,\alpha$, then $x \in POS(X)$;
(N3): If $\mu^{IO}(x) \le (1 - \pi^{IO}(x))\,\beta$, then $x \in NEG(X)$;
(B3): If $(1 - \pi^{IO}(x))\,\beta < \mu^{IO}(x)$ and $\mu^{IO}(x) < (1 - \pi^{IO}(x))\,\alpha$, then $x \in BND(X)$.

5.4. Three-Way Decisions Model Based on IIMRIFS

Suppose $A_i^I$ is the reduction set under IMRS. As in Section 5.1, the expected loss functions $R^{II}(\omega_{\diamond}|[x]_{A_i^I})$ $(\diamond = P, B, N)$ of an object x are as follows:
$$R^{II}(\omega_P|[x]_{A_i^I}) = \lambda_{PP}\,\mu^{II}(x) + \lambda_{PN}\,\nu^{II}(x) + \lambda_{PB}\,\pi^{II}(x);$$
$$R^{II}(\omega_N|[x]_{A_i^I}) = \lambda_{NP}\,\mu^{II}(x) + \lambda_{NN}\,\nu^{II}(x) + \lambda_{NB}\,\pi^{II}(x);$$
$$R^{II}(\omega_B|[x]_{A_i^I}) = \lambda_{BP}\,\mu^{II}(x) + \lambda_{BN}\,\nu^{II}(x) + \lambda_{BB}\,\pi^{II}(x),$$
where, for the lower approximation,
$$\mu^{II}(x) = \mu_{\underline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E)}(x) = \bigwedge_{i=1}^{r} \inf_{y \in [x]_{A_i^I}} \mu_E(y), \quad \nu^{II}(x) = \nu_{\underline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E)}(x) = \bigvee_{i=1}^{r} \sup_{y \in [x]_{A_i^I}} \nu_E(y), \quad \pi^{II}(x) = 1 - \mu^{II}(x) - \nu^{II}(x);$$
or, for the upper approximation,
$$\mu^{II}(x) = \mu_{\overline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E)}(x) = \bigvee_{i=1}^{r} \sup_{y \in [x]_{A_i^I}} \mu_E(y), \quad \nu^{II}(x) = \nu_{\overline{\sum_{i=1}^{r} R_{A_i^I}}^{I}(E)}(x) = \bigwedge_{i=1}^{r} \inf_{y \in [x]_{A_i^I}} \nu_E(y), \quad \pi^{II}(x) = 1 - \mu^{II}(x) - \nu^{II}(x).$$
Therefore, the three-way decisions rules based on IIMRIFS are captured as follows:
(P4): If $\mu^{II}(x) \ge (1 - \pi^{II}(x))\,\alpha$, then $x \in POS(X)$;
(N4): If $\mu^{II}(x) \le (1 - \pi^{II}(x))\,\beta$, then $x \in NEG(X)$;
(B4): If $(1 - \pi^{II}(x))\,\beta < \mu^{II}(x)$ and $\mu^{II}(x) < (1 - \pi^{II}(x))\,\alpha$, then $x \in BND(X)$.
By constructing the above three-way decisions models, the redundant objects in the reduction sets can be removed, which is beneficial to the optimal granularity selection.

5.5. Comprehensive Measuring Methods of Granularity

Definition 17
([40]). Let $\tilde{E}(f_1) = (\mu_{\tilde{E}}(f_1), \nu_{\tilde{E}}(f_1))$, $f_1 \in U$, be an intuitionistic fuzzy number. The score function of $\tilde{E}(f_1)$ is calculated as:
$$S(\tilde{E}(f_1)) = \mu_{\tilde{E}}(f_1) - \nu_{\tilde{E}}(f_1).$$
The accuracy function of $\tilde{E}(f_1)$ is defined as:
$$H(\tilde{E}(f_1)) = \mu_{\tilde{E}}(f_1) + \nu_{\tilde{E}}(f_1),$$
where $-1 \le S(\tilde{E}(f_1)) \le 1$ and $0 \le H(\tilde{E}(f_1)) \le 1$.
Definition 18.
Let $DIS = (U, C \cup D)$ be a decision information system, and $A = \{A_1, A_2, \ldots, A_m\}$ are m sub-attributes of C. Suppose E is an IFS on the universe $U = \{x_1, x_2, \ldots, x_n\}$, defined by the membership and non-membership functions $\mu_{A_i}(x_j)$ and $\nu_{A_i}(x_j)$, $|[x_j]_{A_i}|$ is the cardinality of the equivalence class of $x_j$ on granularity $A_i$, and $U/D = \{X_1, X_2, \ldots, X_s\}$ is the partition induced by the decision attributes D. Then the comprehensive score function of granularity $A_i$ is given by:
$$CSF_{A_i}(E) = \frac{1}{s} \times \sum_{j=1}^{n} \frac{\mu_{A_i}(x_j) - \nu_{A_i}(x_j)}{|[x_j]_{A_i}|}.$$
The comprehensive accuracy function of granularity $A_i$ is given by:
$$CAF_{A_i}(E) = \frac{1}{s} \times \sum_{j=1}^{n} \frac{\mu_{A_i}(x_j) + \nu_{A_i}(x_j)}{|[x_j]_{A_i}|},$$
where $-1 \le CSF_{A_i}(E) \le 1$ and $0 \le CAF_{A_i}(E) \le 1$.
According to references [27,39], we can deduce the comparison rules given in Definition 19.
Definition 19.
Let $A_1$, $A_2$ be two granularities; then we have:
(1) If $CSF_{A_1}(E) > CSF_{A_2}(E)$, then $A_2$ is smaller than $A_1$, expressed as $A_1 > A_2$;
(2) If $CSF_{A_1}(E) < CSF_{A_2}(E)$, then $A_1$ is smaller than $A_2$, expressed as $A_1 < A_2$;
(3) If $CSF_{A_1}(E) = CSF_{A_2}(E)$, then:
(i) If $CAF_{A_1}(E) = CAF_{A_2}(E)$, then $A_2$ is equal to $A_1$, expressed as $A_1 = A_2$;
(ii) If $CAF_{A_1}(E) > CAF_{A_2}(E)$, then $A_2$ is smaller than $A_1$, expressed as $A_1 > A_2$;
(iii) If $CAF_{A_1}(E) < CAF_{A_2}(E)$, then $A_1$ is smaller than $A_2$, expressed as $A_1 < A_2$.
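A sketch of Definitions 18 and 19: given a granularity's equivalence classes over the surviving objects and the $(\mu, \nu)$ pair assigned to each object, compute the comprehensive score and accuracy and compare two granularities. Following the worked computation in Section 6, `s` is taken here as the number of blocks in the partition; that reading, and all names, are ours.

def csf_caf(partition, approx):
    # CSF: (1/s) * sum of (mu - nu) / |[x]_{A_i}| over all objects;
    # CAF: the same with (mu + nu).
    s = len(partition)
    csf = caf = 0.0
    for block in partition:
        for x in block:
            mu, nu = approx[x]
            csf += (mu - nu) / len(block)
            caf += (mu + nu) / len(block)
    return csf / s, caf / s

def better_granularity(name1, pair1, name2, pair2):
    # Definition 19: rank by CSF first; break ties with CAF.
    (csf1, caf1), (csf2, caf2) = pair1, pair2
    if csf1 != csf2:
        return name1 if csf1 > csf2 else name2
    if caf1 == caf2:
        return None  # the two granularities are equally good
    return name1 if caf1 > caf2 else name2

# Reproduces the Section 6 values for granularity A2 (lower OOMRIFS):
part = [{'x2'}, {'x3', 'x5'}, {'x6'}, {'x10'}]
vals = {'x2': (0.49, 0.38), 'x3': (0.49, 0.38), 'x5': (0.49, 0.38),
        'x6': (0.25, 0.46), 'x10': (0.67, 0.23)}
print(csf_caf(part, vals))  # approximately (0.1125, 0.8375)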

5.6. Optimal Granularity Selection Algorithm to Derive Three-Way Decisions from MRIFS

Suppose the reduction sets of the optimistic and pessimistic MRS are $A_i^O$ and $A_i^I$, respectively. In this section, we take the reduction set under OMRS as an example and produce the optimal granularity selection result, denoted $A_{opt}^{O}$.
Algorithm 2. Optimal granularity selection algorithm deriving three-way decisions from MRIFS
Input: $DIS = (U, C \cup D, V, f)$; $A = \{A_1, A_2, \ldots, A_m\}$, m sub-attributes of the condition attributes C, $A_i \in A$; $U/D = \{X_1, X_2, \ldots, X_s\}$; an IFS E;
Output: The optimal granularity selection result $A_{opt}^{O}$.
1: compute the reduction set $A_i^O$ via Algorithm 1;
2: if $|A_i^O| > 1$
3:  for $A_i \in A_i^O$
4:   compute $\mu_{\underline{\sum_{i=1}^{r} R_{A_i^O}}^{\Delta}(E)}(x_j)$, $\nu_{\underline{\sum_{i=1}^{r} R_{A_i^O}}^{\Delta}(E)}(x_j)$, $\mu_{\overline{\sum_{i=1}^{r} R_{A_i^O}}^{\Delta}(E)}(x_j)$ and $\nu_{\overline{\sum_{i=1}^{r} R_{A_i^O}}^{\Delta}(E)}(x_j)$;
5:   according to (P1)-(B1) and (P2)-(B2), compute $POS(\underline{X^{O\Delta}})$, $NEG(\underline{X^{O\Delta}})$, $BND(\underline{X^{O\Delta}})$ and $POS(\overline{X^{O\Delta}})$, $NEG(\overline{X^{O\Delta}})$, $BND(\overline{X^{O\Delta}})$;
6:   if $NEG(\underline{X^{O\Delta}}) \neq U$ or $NEG(\overline{X^{O\Delta}}) \neq U$
7:     compute $U/A_i^{\underline{O\Delta}}$, $CSF_{A_i}^{\underline{O\Delta}}(E)$, $CAF_{A_i}^{\underline{O\Delta}}(E)$ or $U/A_i^{\overline{O\Delta}}$, $CSF_{A_i}^{\overline{O\Delta}}(E)$, $CAF_{A_i}^{\overline{O\Delta}}(E)$;
8:     rank the granularities according to Definition 19;
9:     return $A_{opt}^{O} = A_i$;
10:  end
11:  else
12:    return NULL;
13:  end
14: end
15: end
16: else
17: return $A_{opt}^{O} = A_i^O$;
18: end

6. Example Analysis 3 (Continued with Example 2)

In Example 1, only site 1 can be ignored under both the optimistic and pessimistic multi-granulation conditions, so it can be determined that site 1 does not need to be evaluated, while sites 2 and 3 need to be further investigated under the optimistic multi-granulation environment. At the same time, under the pessimistic multi-granulation environment, comprehensive consideration shows that site 3 can be exempted from assessment, while sites 2, 4 and 5 need to be further investigated.
According to Example 1, the reduction set of OMRS is $\{A_2, A_3\}$; in the case of IMRS, however, there are two reduction sets, which contradict each other. Therefore, the two reduction sets should be reconsidered simultaneously, so the joint reduction set under IMRS is $\{A_2, A_4, A_5\}$.
The corresponding granularity structures of sites 2, 3, 4 and 5 are as follows:
  • $U/A_2 = \{\{x_1, x_2, x_4\}, \{x_3, x_5, x_7\}, \{x_6, x_8, x_9\}, \{x_{10}\}\}$,
  • $U/A_3 = \{\{x_1, x_4, x_6\}, \{x_2, x_3, x_5\}, \{x_8\}, \{x_7, x_9, x_{10}\}\}$,
  • $U/A_4 = \{\{x_1, x_2, x_3, x_5\}, \{x_4\}, \{x_6, x_7, x_8\}, \{x_9, x_{10}\}\}$,
  • $U/A_5 = \{\{x_1, x_3, x_4, x_6\}, \{x_2, x_7\}, \{x_5, x_8\}, \{x_9, x_{10}\}\}$.
According to reference [11], we can get:
$$\alpha = \frac{8 - 2}{(8 - 2) + (2 - 0)} = 0.75; \qquad \beta = \frac{2 - 0}{(2 - 0) + (6 - 2)} = 0.33.$$
The optimal site selection process under OMRS and IMRS is as follows:
(1) Optimal site selection based on OOMRIFS
According to Example 2, we can get the values of the evaluation functions $\mu_{\underline{OO}}(x_j)$, $(1 - \pi_{\underline{OO}}(x_j))\alpha$, $(1 - \pi_{\underline{OO}}(x_j))\beta$, $\mu_{\overline{OO}}(x_j)$, $(1 - \pi_{\overline{OO}}(x_j))\alpha$ and $(1 - \pi_{\overline{OO}}(x_j))\beta$ of OOMRIFS, as shown in Table 4.
By the three-way decisions of Section 5.1, the decision results for the lower and upper approximations of OOMRIFS are as follows:
$$POS(\underline{X^{OO}}) = \emptyset, \quad NEG(\underline{X^{OO}}) = \{x_1, x_4, x_7, x_8, x_9\}, \quad BND(\underline{X^{OO}}) = \{x_2, x_3, x_5, x_6, x_{10}\};$$
$$POS(\overline{X^{OO}}) = \{x_6, x_9\}, \quad NEG(\overline{X^{OO}}) = \{x_8\}, \quad BND(\overline{X^{OO}}) = \{x_2, x_3, x_5\}.$$
In light of the three-way decisions rules based on OOMRIFS, after removing the objects in the rejection domain, we fuse the objects in the delay domain with those in the acceptance domain for the optimal granularity selection. Therefore, the new granularities $A_2$, $A_3$ are as follows:
$$U/A_2^{\underline{OO}} = \{\{x_2\}, \{x_3, x_5\}, \{x_6\}, \{x_{10}\}\}, \quad U/A_3^{\underline{OO}} = \{\{x_2, x_3, x_5\}, \{x_6\}, \{x_{10}\}\};$$
$$U/A_2^{\overline{OO}} = \{\{x_1, x_2, x_4\}, \{x_3, x_5, x_7\}, \{x_6, x_9\}, \{x_{10}\}\}, \quad U/A_3^{\overline{OO}} = \{\{x_1, x_4, x_6\}, \{x_2, x_3, x_5\}, \{x_7, x_9, x_{10}\}\}.$$
Then, according to Definition 18, we can get:
$$CSF_{A_2}^{\underline{OO}}(E) = \frac{1}{4} \times \left( (0.49 - 0.38) + \frac{(0.49 - 0.38) + (0.49 - 0.38)}{2} + (0.25 - 0.46) + (0.67 - 0.23) \right) = 0.1125,$$
$$CSF_{A_3}^{\underline{OO}}(E) = \frac{1}{3} \times \left( (0.25 - 0.46) + \frac{(0.49 - 0.38) + (0.49 - 0.38) + (0.49 - 0.38)}{3} + (0.67 - 0.23) \right) = 0.1133.$$
Similarly, we have:
$$CSF_{A_2}^{\overline{OO}}(E) = 0.4, \quad CSF_{A_3}^{\overline{OO}}(E) = 0.3533.$$
From the above results we can see that, in OOMRIFS, the selection between sites 2 and 3 cannot be made from the comprehensive score functions of granularities $A_2$ and $A_3$ alone. Therefore, we further calculate the comprehensive accuracies:
$$CAF_{A_2}^{\underline{OO}}(E) = \frac{1}{4} \times \left( (0.49 + 0.38) + \frac{(0.49 + 0.38) + (0.49 + 0.38)}{2} + (0.25 + 0.46) + (0.67 + 0.23) \right) = 0.8375,$$
$$CAF_{A_3}^{\underline{OO}}(E) = \frac{1}{3} \times \left( (0.25 + 0.46) + \frac{(0.49 + 0.38) + (0.49 + 0.38) + (0.49 + 0.38)}{3} + (0.67 + 0.23) \right) = 0.8267.$$
Analogously, we have:
$$CAF_{A_2}^{\overline{OO}}(E) = 0.87, \quad CAF_{A_3}^{\overline{OO}}(E) = 0.86.$$
Through the above calculations, the comprehensive accuracy of granularity $A_3$ is the higher one, so site 3 is selected as the selection result.
(2) Optimal site selection based on OIMRIFS
In the same way as (1), we can get the values of the evaluation functions $\mu_{\underline{OI}}(x_j)$, $(1 - \pi_{\underline{OI}}(x_j))\alpha$, $(1 - \pi_{\underline{OI}}(x_j))\beta$, $\mu_{\overline{OI}}(x_j)$, $(1 - \pi_{\overline{OI}}(x_j))\alpha$ and $(1 - \pi_{\overline{OI}}(x_j))\beta$ of OIMRIFS, listed in Table 5.
By the three-way decisions of Section 5.2, the decision results for the lower and upper approximations of OIMRIFS are as follows:
$$POS(\underline{X^{OI}}) = \emptyset, \quad NEG(\underline{X^{OI}}) = U, \quad BND(\underline{X^{OI}}) = \emptyset;$$
$$POS(\overline{X^{OI}}) = \{x_1, x_4, x_6, x_7, x_8, x_9, x_{10}\}, \quad NEG(\overline{X^{OI}}) = \emptyset, \quad BND(\overline{X^{OI}}) = \{x_2, x_3, x_5\}.$$
Hence, for the upper approximation of OIMRIFS, the new granularities $A_2$, $A_3$ are as follows:
$$U/A_2^{\overline{OI}} = \{\{x_1, x_2, x_4\}, \{x_3, x_5, x_7\}, \{x_6, x_8, x_9\}, \{x_{10}\}\}, \quad U/A_3^{\overline{OI}} = \{\{x_1, x_4, x_6\}, \{x_2, x_3, x_5\}, \{x_8\}, \{x_7, x_9, x_{10}\}\}.$$
According to Definition 18, we can calculate that:
$$CSF_{A_2}^{\underline{OI}}(E) = CSF_{A_3}^{\underline{OI}}(E) = 0; \qquad CAF_{A_2}^{\underline{OI}}(E) = CAF_{A_3}^{\underline{OI}}(E) = 0;$$
$$CSF_{A_2}^{\overline{OI}}(E) = 0.6317, \quad CSF_{A_3}^{\overline{OI}}(E) = 0.6783; \qquad CAF_{A_2}^{\overline{OI}}(E) = 0.885, \quad CAF_{A_3}^{\overline{OI}}(E) = 0.905.$$
In OIMRIFS, both the comprehensive score and the comprehensive accuracy of granularity $A_3$ are higher than those of granularity $A_2$, so we choose site 3 as the evaluation site.
In reality, we are more inclined to select the optimal granularity under the more stringent requirements. According to (1) and (2), granularity $A_3$ is the better choice when the requirements are stricter in the four cases under OMRS. Therefore, we choose site 3 as the optimal evaluation site.
(3) Optimal site selection based on IOMRIFS
Similar to (1), we can obtain the values of the evaluation functions $\mu_{\underline{IO}}(x_j)$, $(1 - \pi_{\underline{IO}}(x_j))\alpha$, $(1 - \pi_{\underline{IO}}(x_j))\beta$, $\mu_{\overline{IO}}(x_j)$, $(1 - \pi_{\overline{IO}}(x_j))\alpha$ and $(1 - \pi_{\overline{IO}}(x_j))\beta$ of IOMRIFS, as described in Table 6.
By the three-way decisions of Section 5.3, the decision results for the lower and upper approximations of IOMRIFS are as follows:
$$POS(\underline{X^{IO}}) = \emptyset, \quad NEG(\underline{X^{IO}}) = \{x_7, x_8\}, \quad BND(\underline{X^{IO}}) = \{x_1, x_2, x_3, x_4, x_5, x_6, x_9, x_{10}\};$$
$$POS(\overline{X^{IO}}) = \{x_6, x_9$$