Article

Multiple Granulation Rough Set Approach to Interval-Valued Intuitionistic Fuzzy Ordered Information Systems

College of Artificial Intelligence, Southwest University, Chongqing 400715, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Symmetry 2021, 13(6), 949; https://doi.org/10.3390/sym13060949
Submission received: 23 March 2021 / Revised: 12 May 2021 / Accepted: 12 May 2021 / Published: 27 May 2021
(This article belongs to the Special Issue Uncertain Multi-Criteria Optimization Problems)

Abstract

As a further extension of the fuzzy set and the intuitionistic fuzzy set, the interval-valued intuitionistic fuzzy set (IIFS) is a more effective tool for dealing with uncertain problems. However, the classical rough set is based on the equivalence relation, which does not apply to the IIFS. In this paper, we combine the IIFS with the ordered information system to obtain the interval-valued intuitionistic fuzzy ordered information system (IIFOIS). On this basis, three types of multiple granulation rough set models based on the dominance relation are established to effectively overcome the limitation mentioned above; this work belongs to the interdisciplinary subject of information theory in mathematics and pattern recognition. First, for an IIFOIS, we put forward a multiple granulation rough set (MGRS) model from two completely symmetric positions, the optimistic and the pessimistic one. Furthermore, we discuss the approximation representation and a few essential characteristics of the target concept, and several significant rough measures of the two symmetric MGRS models. In addition, a more general MGRS model, named the generalized MGRS (GMGRS) model, is proposed in an IIFOIS, and some important properties and rough measures are also investigated. Finally, the relationships and differences between the single granulation rough set and the three types of MGRS are discussed carefully by comparing their rough measures in an IIFOIS. In order to better apply the theory to realistic problems, an actual case illustrating the methods of the MGRS models in an IIFOIS is given in this paper.

1. Introduction

For decades, the rough set concept presented by Pawlak [1] has been one of the most popular ideas in the artificial intelligence literature. The theory completely subverts the conception of classical sets and has become a soft computing tool for dealing with impreciseness, indeterminacy and vagueness in data processing. It has been widely used in data mining [2,3], conflict analysis [4,5,6], pattern recognition [7,8] and so on. At present, great progress has been made in the theoretical basis and applied research of rough sets in China; many scholars have published corresponding monographs and hundreds of papers in this field [9,10].
Atanassov [11] proposed the concept of the intuitionistic fuzzy set [12] on the basis of the fuzzy set in 1983, as an extension of the fuzzy set [13,14]. The intuitionistic fuzzy set accommodates information on both membership and non-membership [15] and is more comprehensive and practical than the fuzzy set in dealing with vagueness and uncertainty. The integration of intuitionistic fuzzy sets and rough sets produces another blended model for processing intuitionistic fuzzy data [16]. For example, Coker first discussed the relationship between intuitionistic fuzzy set theory and rough set theory. On the basis of the fuzzy rough set given by Nanda, Jena and Chakrabraty put forward different concepts of intuitionistic fuzzy rough set theory. Wu and Liu [17] investigated the intuitionistic fuzzy equivalence relation in intuitionistic fuzzy systems and obtained an upper approximation reduction model in the intuitionistic fuzzy information system. Zhou et al. [18] designed an approximation method for the intuitionistic fuzzy rough set, and further improved the efficiency of the approximate representation algorithm for intuitionistic fuzzy rough sets in [19]. Zhang [20] researched some properties and conclusions of the upper and lower approximations of intuitionistic fuzzy coverings.
The object of classical rough set theory is a complete information system [21], which partitions the universe of discourse through a binary indiscernibility relation, the equivalence relation [22], so it can only process discrete data. In reality, however, due to the complexity and uncertainty of the environment, the attribute values of a number of objects appear in the form of intuitionistic fuzzy numbers, and some attribute values are better or worse than others. To solve this problem, some scholars proposed an extension of the rough set model based on the dominance relation in place of the equivalence relation, and applied classical rough set theory to the ordered information system. In recent years, several researchers have been committed to studying the properties and algorithms of rough sets based on dominance relations. For example, Xu and Zhang [23] proposed new lower and upper approximations and obtained several important properties in the generalized rough set induced by a covering. Xu et al. [24] proposed the concepts of knowledge granulation, knowledge entropy and the knowledge uncertainty measure in ordered information systems, and investigated some important properties. For an ordered information system, Xu et al. [25] dealt with the problem of attribute reduction with proof theory. After that, Xu et al. [26] combined intuitionistic fuzzy set theory with the ordered information system to further expand intuitionistic fuzzy set theory.
From the perspective of granular computing, an equivalence relation on the universe can be regarded as a granulation that divides the domain of discourse into equivalence classes [27]. Hence the classical rough set model can be regarded as being based on a single granulation, namely an equivalence relation. In addition, any attribute set induces an equivalence relation in an information system. When based on multiple granulations, we can have the following situations:
Case 1: There exists at least one granulation such that the element surely belongs to the target concept.
Case 2: There exists at least one granulation such that the element possibly belongs to the target concept.
Case 3: There are some granulations such that the element surely belongs to the target concept.
Case 4: There are some granulations such that the element possibly belongs to the target concept.
Case 5: Under all granulations, the element surely belongs to the target concept.
Case 6: Under all granulations, the element possibly belongs to the target concept.
For Cases 1, 2, 5 and 6, a multiple source rough set model was proposed by Khan et al. [28]. Qian et al. also made a preliminary exploration of the rough set model from this perspective: they defined the upper and lower approximation operators of the optimistic and pessimistic multiple granulation rough set models, which are symmetric concepts, gave the properties of these approximation operators and introduced a measure of the uncertainty of the target concept. Xu et al. [26] generalized the multiple granulation rough set model to the ordered information system for Cases 1, 2, 5 and 6, and established a rough set model based on the ordered information system.
The interval-valued intuitionistic fuzzy set can deal with more complex and practical problems [29,30]. How to establish dominance relations on the interval values of object attributes has become a hot research topic. Qian et al. [31] first defined the dominance relation by comparing the upper and lower boundaries of interval values, and used the size of the upper or lower bound to judge the advantage or disadvantage of interval values. Zeng et al. [32] used the radius and center of interval values to define the dominance relation in order to reduce attributes in the interval-valued ordered information system. Yu et al. considered the dominance relation of two intersecting interval values through the distribution principle of probability, and then constructed upper and lower approximation sets of the interval-valued information system based on this dominance relation. Huang et al. [33] introduced a dominance relation in the framework of interval-valued intuitionistic fuzzy information systems to arrive at the concept of a dominance-based interval-valued intuitionistic fuzzy information system, which is used to establish a dominance-based rough set model.
Intervaling the intuitionistic fuzzy set makes it possible to deal with uncertainty and vagueness more effectively. This paper draws on the definition of the order between intuitionistic fuzzy numbers in [34]. The interval-valued intuitionistic fuzzy ordered information system [35,36] is obtained by combining the interval-valued intuitionistic fuzzy set [37] with the ordered information system, and the single granulation rough set model based on the dominance relation in an IIFOIS is extended to two types of multiple granulation rough set models. In addition, considering that the approximation representation conditions of these two MGRS models for a concept are either too loose or too strict, a generalized multiple granulation rough set model in an IIFOIS is proposed from the perspective of the lower approximation. The rest of the paper is organized as follows. Some basic concepts about the intuitionistic fuzzy set, the interval-valued intuitionistic fuzzy ordered information system and rough set theory in an IIFOIS are introduced in Section 2. In Section 3 and Section 4, for an IIFOIS, two types of symmetric MGRS models are obtained, respectively, where a target concept is approximated from different kinds of views by the dominance classes induced by multiple dominance relations. In addition, a number of important properties, the rough measure and the quality of approximation of the two types of MGRS models are investigated in an IIFOIS. In Section 5, the generalized multiple granulation rough set model based on an IIFOIS is proposed and some important properties are discussed. In Section 6, the relationships and differences regarding the rough set, the rough measure and the quality of approximation are discussed between the single granulation rough set and the three types of MGRS models. Finally, the paper is concluded with a summary and an outlook for further research in Section 7.

2. Preliminaries

In this section, we will introduce several basic concepts, including the interval-valued intuitionistic fuzzy set (IIFS) and related operations, the interval-valued intuitionistic fuzzy ordered information system (IIFOIS) and the rough set based on such a system. More details can be found in the references.

2.1. The Interval-Valued Intuitionistic Fuzzy Set

Let U be the universe. An interval-valued intuitionistic fuzzy set A^[] on U is
A^[] = { ⟨x, [μ_A^-(x), μ_A^+(x)], [ν_A^-(x), ν_A^+(x)]⟩ | x ∈ U },
where μ_A^-(x), μ_A^+(x) : U → [0, 1] and ν_A^-(x), ν_A^+(x) : U → [0, 1] satisfy 0 ≤ μ_A^-(x) ≤ μ_A^+(x) ≤ 1, 0 ≤ ν_A^-(x) ≤ ν_A^+(x) ≤ 1 and 0 ≤ μ_A^+(x) + ν_A^+(x) ≤ 1 for any x ∈ U. [μ_A^-(x), μ_A^+(x)] and [ν_A^-(x), ν_A^+(x)] are called the membership and non-membership degree intervals of x relative to the interval-valued intuitionistic fuzzy set A^[], respectively, where μ_A^-(x) and ν_A^-(x) are the lower bounds and μ_A^+(x) and ν_A^+(x) are the upper bounds of the intervals. IIFS(U) denotes the class of all interval-valued intuitionistic fuzzy sets on U.
Suppose A^[], B^[] ∈ IIFS(U), x ∈ U. The operations on the sets A^[] and B^[] are as follows.
(1) A^[] ⊆ B^[] ⇔ μ_A^-(x) ≤ μ_B^-(x), μ_A^+(x) ≤ μ_B^+(x), ν_A^-(x) ≥ ν_B^-(x), ν_A^+(x) ≥ ν_B^+(x).
(2) A^[] ∪ B^[] = { ⟨x, [μ_A^-(x) ∨ μ_B^-(x), μ_A^+(x) ∨ μ_B^+(x)], [ν_A^-(x) ∧ ν_B^-(x), ν_A^+(x) ∧ ν_B^+(x)]⟩ | x ∈ U }.
(3) A^[] ∩ B^[] = { ⟨x, [μ_A^-(x) ∧ μ_B^-(x), μ_A^+(x) ∧ μ_B^+(x)], [ν_A^-(x) ∨ ν_B^-(x), ν_A^+(x) ∨ ν_B^+(x)]⟩ | x ∈ U }.
(4) (A^[])^c = { ⟨x, [ν_A^-(x), ν_A^+(x)], [μ_A^-(x), μ_A^+(x)]⟩ | x ∈ U },
where ∨ and ∧ represent the max and min operations, respectively.
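These interval operations are easy to mechanize. The following is a minimal sketch (not from the paper; the dictionary representation and the element names are our own) of the four operations above, with an IIFS stored as a mapping from elements to a pair of intervals:

```python
# An IIFS on U stored as {element: ((mu_lo, mu_hi), (nu_lo, nu_hi))},
# i.e. a membership interval and a non-membership interval per element.

def iifs_subset(a, b):
    # A ⊆ B: memberships pointwise <=, non-memberships pointwise >=
    return all(
        a[x][0][0] <= b[x][0][0] and a[x][0][1] <= b[x][0][1] and
        a[x][1][0] >= b[x][1][0] and a[x][1][1] >= b[x][1][1]
        for x in a
    )

def iifs_union(a, b):
    # A ∪ B: max (∨) on memberships, min (∧) on non-memberships
    return {x: ((max(a[x][0][0], b[x][0][0]), max(a[x][0][1], b[x][0][1])),
                (min(a[x][1][0], b[x][1][0]), min(a[x][1][1], b[x][1][1])))
            for x in a}

def iifs_intersection(a, b):
    # A ∩ B: min (∧) on memberships, max (∨) on non-memberships
    return {x: ((min(a[x][0][0], b[x][0][0]), min(a[x][0][1], b[x][0][1])),
                (max(a[x][1][0], b[x][1][0]), max(a[x][1][1], b[x][1][1])))
            for x in a}

def iifs_complement(a):
    # A^c: swap the membership and non-membership intervals
    return {x: (a[x][1], a[x][0]) for x in a}

A = {"x1": ((0.2, 0.4), (0.3, 0.5)), "x2": ((0.5, 0.6), (0.1, 0.3))}
B = {"x1": ((0.3, 0.5), (0.2, 0.4)), "x2": ((0.5, 0.7), (0.1, 0.2))}
print(iifs_subset(A, B))                         # True: here A ⊆ B
print(iifs_union(A, B)["x1"])                    # ((0.3, 0.5), (0.2, 0.4))
print(iifs_complement(iifs_complement(A)) == A)  # True: double complement
```

Since A ⊆ B in this toy example, the union equals B and the intersection equals A, as the definitions require.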

2.2. The Interval-Valued Intuitionistic Fuzzy Ordered Information System

The interval-valued intuitionistic fuzzy information system (IIFIS) can be recorded as I^[] = (U, AT, V, f), where U = {x_1, x_2, …, x_n} represents the set of objects under discussion, and the class of all subsets of U is denoted by P(U). AT = {a_1, a_2, …, a_m} is the collection of all attributes. V = ⋃_{a ∈ AT} V_a, where V_a denotes the value domain of the objects with respect to the attribute a. f : U × AT → V is a function such that f(x, a) = ⟨[μ_a^-(x), μ_a^+(x)], [ν_a^-(x), ν_a^+(x)]⟩ ∈ V_a for any a ∈ AT, x ∈ U, where 0 ≤ μ_a^-(x) ≤ μ_a^+(x) ≤ 1, 0 ≤ ν_a^-(x) ≤ ν_a^+(x) ≤ 1 and 0 ≤ μ_a^+(x) + ν_a^+(x) ≤ 1.
If AT = CT ∪ DT, where CT = {c_1, c_2, …, c_p} is the collection of all condition attributes and DT = {d_1, d_2, …, d_q} is the collection of all decision attributes, then I^[] = (U, CT ∪ DT, V, f) is called the interval-valued intuitionistic fuzzy decision information system (IIFDIS). In particular, according to the number of decision attributes, the IIFDIS can be divided into the interval-valued intuitionistic fuzzy single-decision information system (|DT| = 1) and the interval-valued intuitionistic fuzzy multi-decision information system (|DT| > 1).
Let I^[] = (U, AT, V, f) be an IIFIS and a ∈ AT. According to the domain of the attribute a, for any x_i, x_j ∈ U we define
f(x_i, a) ≼^[] f(x_j, a) ⇔ μ_a^-(x_i) ≤ μ_a^-(x_j), μ_a^+(x_i) ≤ μ_a^+(x_j) and ν_a^-(x_i) ≥ ν_a^-(x_j), ν_a^+(x_i) ≥ ν_a^+(x_j);
f(x_i, a) ≽^[] f(x_j, a) ⇔ μ_a^-(x_i) ≥ μ_a^-(x_j), μ_a^+(x_i) ≥ μ_a^+(x_j) and ν_a^-(x_i) ≤ ν_a^-(x_j), ν_a^+(x_i) ≤ ν_a^+(x_j).
Increasing and decreasing partial order relations can be obtained from "≽^[]" and "≼^[]". In an IIFIS I^[] = (U, AT, V, f), an attribute is a criterion if and only if the values of the objects on that attribute are partially ordered, so the criteria yield a dominance relation. In this paper, we only consider the dominance relation given by the increasing partial order relation. For x_i, x_j ∈ U, x_i ≽_a^[] x_j ⇔ f(x_i, a) ≽^[] f(x_j, a) indicates that x_i is superior to x_j with respect to the criterion a; it also means that x_i is at least as good as x_j on a. For A ⊆ AT, x_i ≽_A x_j means that x_i ≽_a x_j for every a ∈ A.
Let I^[] = (U, AT, V, f) be an IIFIS. If all attributes are criteria, then I^[] is called an interval-valued intuitionistic fuzzy ordered information system (IIFOIS), still recorded as I^[] = (U, AT, V, f). In an IIFOIS I^[] = (U, AT, V, f), for A ⊆ AT, the dominance relation R_A^[≽] is
R_A^[≽] = { (x_i, x_j) ∈ U × U | f(x_j, a) ≽^[] f(x_i, a), ∀a ∈ A }.
It is obvious that R_A^[≽] is reflexive and transitive, but not symmetric; therefore R_A^[≽] is not an equivalence relation.
The dominance class of x_i ∈ U for A under R_A^[≽] is
[x_i]_A^[≽] = { x_j ∈ U | (x_i, x_j) ∈ R_A^[≽] }.
The cover of U induced by the attribute set A is
U/R_A^[≽] = { [x_i]_A^[≽] | x_i ∈ U }.
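To make this construction concrete, here is a small sketch (with made-up attribute values, not those of Table 1; all names are our own) that computes dominance classes from interval-valued intuitionistic fuzzy attribute values: x_j belongs to [x_i]_A^[≽] iff f(x_j, a) ≽^[] f(x_i, a) for every a ∈ A.

```python
# f maps object -> attribute -> ((mu_lo, mu_hi), (nu_lo, nu_hi)).
# geq(v, w) tests v ≽ w: larger membership interval and smaller
# non-membership interval, endpoint by endpoint.

def geq(v, w):
    (vm, vn), (wm, wn) = v, w
    return (vm[0] >= wm[0] and vm[1] >= wm[1] and
            vn[0] <= wn[0] and vn[1] <= wn[1])

def dominance_class(f, A, xi):
    # [x_i]_A = objects whose value dominates x_i on every attribute in A
    return {xj for xj in f if all(geq(f[xj][a], f[xi][a]) for a in A)}

# Hypothetical single-attribute system with three objects.
f = {"x1": {"a1": ((0.6, 0.7), (0.1, 0.2))},
     "x2": {"a1": ((0.4, 0.5), (0.2, 0.3))},
     "x3": {"a1": ((0.6, 0.8), (0.0, 0.1))}}
A = ["a1"]
print(sorted(dominance_class(f, A, "x2")))  # ['x1', 'x2', 'x3']
print(sorted(dominance_class(f, A, "x1")))  # ['x1', 'x3']
```

The class of x_2 contains every object because all three values dominate x_2's value; x_2 itself is included since the relation is reflexive.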
Proposition 1.
Let I^[] = (U, AT, V, f) be an IIFOIS and A, B ⊆ AT. Then we have the following results.
(1) If B ⊆ A, then R_A^[≽] ⊆ R_B^[≽] and [x_i]_A^[≽] ⊆ [x_i]_B^[≽] for any x_i ∈ U.
(2) If x_j ∈ [x_i]_A^[≽], then [x_j]_A^[≽] ⊆ [x_i]_A^[≽] and [x_i]_A^[≽] = ⋃{ [x_j]_A^[≽] | x_j ∈ [x_i]_A^[≽] } for any x_i, x_j ∈ U.
(3) [x_j]_A^[≽] = [x_i]_A^[≽] if and only if μ_a^-(x_i) = μ_a^-(x_j), μ_a^+(x_i) = μ_a^+(x_j) and ν_a^-(x_i) = ν_a^-(x_j), ν_a^+(x_i) = ν_a^+(x_j) for any a ∈ A.
Example 1.
Suppose Table 1 is an interval-valued intuitionistic fuzzy ordered information system describing communities to be sold. U = {x_1, x_2, x_3, x_4, x_5, x_6} is a universe consisting of 6 communities in one city, and AT = {a_1, a_2, a_3, a_4} is the set of conditional attributes of the system, namely location, utility service, type of layout and environment. The decision is the experts' evaluation of whether a community is excellent according to this information: Y expresses that the community is excellent, and N expresses that it is not.
From Table 2, we know that U/d = {D_Y, D_N}, D_Y = {x_1, x_2, x_6}, D_N = {x_3, x_4, x_5}. We can calculate the dominance classes induced by R_AT^[≽].
[x_1]_AT^[≽] = {x_1}, [x_2]_AT^[≽] = {x_2},
[x_3]_AT^[≽] = {x_3}, [x_4]_AT^[≽] = {x_4, x_6},
[x_5]_AT^[≽] = {x_5}, [x_6]_AT^[≽] = {x_6}.
Let A = {a_1, a_2, a_4} ⊆ AT; then we have
[x_1]_A^[≽] = {x_1}, [x_2]_A^[≽] = {x_2},
[x_3]_A^[≽] = {x_1, x_2, x_3, x_5}, [x_4]_A^[≽] = {x_1, x_2, x_4, x_5, x_6},
[x_5]_A^[≽] = {x_5}, [x_6]_A^[≽] = {x_6}.
Obviously, [x_i]_AT^[≽] ⊆ [x_i]_A^[≽], and U/R_AT^[≽] ≠ U/R_A^[≽] = { [x_i]_A^[≽] | x_i ∈ U }.

2.3. The Rough Set in IIFOIS

Suppose I^[] = (U, AT, V, f) is an IIFOIS, X ⊆ U, A ⊆ AT. The lower and upper approximations of X with respect to the dominance relation R_A^[≽] are as follows:
X̲_A^[≽] = R̲_A^[≽](X) = { x ∈ U | [x]_A^[≽] ⊆ X },
X̄_A^[≽] = R̄_A^[≽](X) = { x ∈ U | [x]_A^[≽] ∩ X ≠ ∅ }.
The objects in the lower approximation set X̲_A^[≽] certainly belong to the target set X, while the objects in the upper approximation set X̄_A^[≽] may belong to the target set X. If X̲_A^[≽] = X̄_A^[≽], we say that X is a definable set with respect to the dominance relation R_A^[≽]; otherwise X is rough. Moreover, Pos(X) = X̲_A^[≽], Neg(X) = U − X̄_A^[≽] and Bnd(X) = X̄_A^[≽] − X̲_A^[≽] are, respectively, the positive region, negative region and boundary region of X.
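Given any family of dominance classes, the two approximations reduce to one-line set comprehensions. A minimal sketch (the helper names are our own):

```python
# classes: mapping x -> [x]_A (a Python set); X: the target concept.

def lower_approx(classes, X):
    # x belongs to the lower approximation iff [x]_A ⊆ X
    return {x for x, cls in classes.items() if cls <= X}

def upper_approx(classes, X):
    # x belongs to the upper approximation iff [x]_A ∩ X ≠ ∅
    return {x for x, cls in classes.items() if cls & X}

classes = {"x1": {"x1"}, "x2": {"x2", "x3"}, "x3": {"x3"}}
X = {"x1", "x3"}
print(sorted(lower_approx(classes, X)))  # ['x1', 'x3']
print(sorted(upper_approx(classes, X)))  # ['x1', 'x2', 'x3']
```

Here x_2 is excluded from the lower approximation because its class {x_2, x_3} is not contained in X, but it enters the upper approximation since the class meets X.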
Proposition 2.
Suppose I^[] = (U, AT, V, f) is an IIFOIS and A ⊆ AT. For any X ⊆ U we have that
(1) X̲_A^[≽] ⊆ X̲_AT^[≽] and X̄_A^[≽] ⊇ X̄_AT^[≽].
(2) The approximations of X with respect to R_A^[≽] and R_AT^[≽] coincide if and only if X̲_A^[≽] = X̲_AT^[≽] and X̄_A^[≽] = X̄_AT^[≽].
Proposition 3.
Suppose I^[] = (U, AT, V, f) is an IIFOIS, X, Y ⊆ U, A ⊆ AT, and let ∼X denote the complement U − X. Then we have the following results.
(1L) R̲_A^[≽](X) ⊆ X   (Contraction)
(1U) X ⊆ R̄_A^[≽](X)   (Extension)
(2L) R̲_A^[≽](X) = ∼R̄_A^[≽](∼X)   (Duality)
(2U) R̄_A^[≽](X) = ∼R̲_A^[≽](∼X)   (Duality)
(3L) R̲_A^[≽](∅) = ∅   (Normality)
(3U) R̄_A^[≽](∅) = ∅   (Normality)
(4L) R̲_A^[≽](U) = U   (Co-normality)
(4U) R̄_A^[≽](U) = U   (Co-normality)
(5L) R̲_A^[≽](X ∩ Y) = R̲_A^[≽](X) ∩ R̲_A^[≽](Y)   (Multiplication)
(5U) R̄_A^[≽](X ∪ Y) = R̄_A^[≽](X) ∪ R̄_A^[≽](Y)   (Addition)
(6L) R̲_A^[≽](X ∪ Y) ⊇ R̲_A^[≽](X) ∪ R̲_A^[≽](Y)   (F-addition)
(6U) R̄_A^[≽](X ∩ Y) ⊆ R̄_A^[≽](X) ∩ R̄_A^[≽](Y)   (F-multiplication)
(7L) X ⊆ Y ⇒ R̲_A^[≽](X) ⊆ R̲_A^[≽](Y)   (Monotonicity)
(7U) X ⊆ Y ⇒ R̄_A^[≽](X) ⊆ R̄_A^[≽](Y)   (Monotonicity)
(8L) R̲_A^[≽](R̲_A^[≽](X)) = R̲_A^[≽](X)   (Idempotency)
(8U) R̄_A^[≽](R̄_A^[≽](X)) = R̄_A^[≽](X)   (Idempotency)
Let I^[] = (U, AT, V, f) be an IIFOIS, A ⊆ AT, X ⊆ U. To express the imprecision and roughness of a rough set, the accuracy measure and the rough measure of X with respect to the dominance relation R_A^[≽] are defined as
α(R_A^[≽], X) = |R̲_A^[≽](X)| / |R̄_A^[≽](X)| = |R̲_A^[≽](X)| / (|U| − |R̲_A^[≽](∼X)|),
ρ(R_A^[≽], X) = 1 − α(R_A^[≽], X).
Proposition 4.
Suppose I^[] = (U, AT, V, f) is an IIFOIS, X ⊆ U, A ⊆ AT. We have the following results.
(1) 0 ≤ ρ(R_A^[≽], X) ≤ 1.
(2) ρ(R_A^[≽], X) = 1 − |R̲_A^[≽](X)| / |R̄_A^[≽](X)| = 1 − |R̲_A^[≽](X)| / (|U| − |R̲_A^[≽](∼X)|).
(3) If R_A^[≽] = R_AT^[≽], then ρ(R_A^[≽], X) = ρ(R_AT^[≽], X).
(4) If B ⊆ A ⊆ AT, then ρ(R_AT^[≽], X) ≤ ρ(R_A^[≽], X) ≤ ρ(R_B^[≽], X).
Let I^[] = (U, CT ∪ {d}, V, f) be an IIFDOIS and A ⊆ AT. The quality of approximation of d by R_A^[≽], also called the degree of dependency, is defined as
γ(R_A^[≽], d) = (1/|U|) Σ_{j=1}^{k} |R̲_A^[≽](D_j)|,
where R_d = { (x_i, x_j) ∈ U × U | f(x_i, d) = f(x_j, d) } and U/d = {D_1, D_2, …, D_k}.
Proposition 5.
Suppose I^[] = (U, AT, V, f) is an IIFDOIS, X ⊆ U, A ⊆ AT. We have
(1) 0 ≤ γ(R_A^[≽], d) ≤ 1.
(2) If R_A^[≽] = R_AT^[≽], then γ(R_A^[≽], d) = γ(R_AT^[≽], d).
(3) If B ⊆ A ⊆ AT, then γ(R_AT^[≽], d) ≥ γ(R_A^[≽], d) ≥ γ(R_B^[≽], d).
Example 2
(Continued from Example 1). Let X = D_Y = {x_1, x_2, x_6}. Then the lower and upper approximation sets of the concept X by the dominance relation R_AT^[≽] are as follows:
X̲_AT^[≽] = {x_1, x_2, x_6},
X̄_AT^[≽] = {x_1, x_2, x_4, x_6}.
Similarly, the lower and upper approximation sets of the concept X by the dominance relation R_A^[≽] are as follows:
X̲_A^[≽] = {x_1, x_2, x_6},
X̄_A^[≽] = {x_1, x_2, x_3, x_4, x_6}.
Considering the rough measures, we get
ρ(R_AT^[≽], X) = 1 − |R̲_AT^[≽](X)| / |R̄_AT^[≽](X)| = 1/4,  ρ(R_A^[≽], X) = 1 − |R̲_A^[≽](X)| / |R̄_A^[≽](X)| = 2/5.
Moreover,
R̲_AT^[≽](D_Y) = {x_1, x_2, x_6}, R̲_AT^[≽](D_N) = {x_3, x_5};
R̲_A^[≽](D_Y) = {x_1, x_2, x_6}, R̲_A^[≽](D_N) = {x_5}.
Then
γ(R_AT^[≽], d) = (|R̲_AT^[≽](D_Y)| + |R̲_AT^[≽](D_N)|) / |U| = 5/6,  γ(R_A^[≽], d) = (|R̲_A^[≽](D_Y)| + |R̲_A^[≽](D_N)|) / |U| = 2/3.
In this way, we can see that ρ(R_AT^[≽], X) ≤ ρ(R_A^[≽], X) and γ(R_AT^[≽], d) ≥ γ(R_A^[≽], d).
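The numbers in Example 2 can be re-checked mechanically from the dominance classes listed in Example 1 (the helper names below are our own):

```python
from fractions import Fraction

def lower_approx(classes, X):
    return {x for x, c in classes.items() if c <= X}

def upper_approx(classes, X):
    return {x for x, c in classes.items() if c & X}

# Dominance classes from Example 1, for AT and for A = {a1, a2, a4}.
AT = {"x1": {"x1"}, "x2": {"x2"}, "x3": {"x3"},
      "x4": {"x4", "x6"}, "x5": {"x5"}, "x6": {"x6"}}
A = {"x1": {"x1"}, "x2": {"x2"},
     "x3": {"x1", "x2", "x3", "x5"},
     "x4": {"x1", "x2", "x4", "x5", "x6"},
     "x5": {"x5"}, "x6": {"x6"}}
DY, DN = {"x1", "x2", "x6"}, {"x3", "x4", "x5"}

def rho(classes, X):
    # rough measure: 1 - |lower| / |upper|
    return 1 - Fraction(len(lower_approx(classes, X)), len(upper_approx(classes, X)))

def gamma(classes, partition, n):
    # quality of approximation: sum of lower-approximation sizes over |U|
    return Fraction(sum(len(lower_approx(classes, D)) for D in partition), n)

print(rho(AT, DY), rho(A, DY))                        # 1/4 2/5
print(gamma(AT, [DY, DN], 6), gamma(A, [DY, DN], 6))  # 5/6 2/3
```

The output reproduces the four values in the example, and shows the monotonicity of Propositions 4(4) and 5(3) on this data.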

3. The Optimistic Multiple Granulation Rough Set in IIFOIS

The rough set above approximates the target concept by constructing upper and lower approximations through a single dominance relation, so it is a single granulation rough set model. However, granular computing emphasizes observing and solving problems under different granulations, which leads to the concept of the multiple granulation rough set.
In this section, we will consider the multiple granulation rough set in an IIFOIS from a completely optimistic perspective, which means that an object is accepted as long as just one condition is met. We will investigate the representation of the upper and lower approximation operators and discuss two basic measures and their properties.
Definition 1.
Let I^[] = (U, AT, V, f) be an IIFOIS, A_1, A_2, …, A_s ⊆ AT (2 ≤ s ≤ 2^|AT|), and let R_{A_1}^[≽], R_{A_2}^[≽], …, R_{A_s}^[≽] be the corresponding dominance relations. For any X ∈ P(U), the operators OM̲_{ΣA_i}^[≽] and OM̄_{ΣA_i}^[≽] : P(U) → P(U) are defined as follows:
OM̲_{ΣA_i}^[≽](X) = { x ∈ U | ⋁_{i=1}^{s} ([x]_{A_i}^[≽] ⊆ X) },
OM̄_{ΣA_i}^[≽](X) = { x ∈ U | ⋀_{i=1}^{s} ([x]_{A_i}^[≽] ∩ X ≠ ∅) },
where “⋁” means “or” and “⋀” means “and”. OM̲_{ΣA_i}^[≽](X) and OM̄_{ΣA_i}^[≽](X) are called the optimistic multiple granulation lower and upper approximations of X by the dominance relations R_{A_1}^[≽], R_{A_2}^[≽], …, R_{A_s}^[≽] in an IIFOIS.
Similarly, if OM̲_{ΣA_i}^[≽](X) = OM̄_{ΣA_i}^[≽](X), then X is an optimistic definable set with respect to the multiple granulation dominance relations induced by A_1, A_2, …, A_s; otherwise X is an optimistic rough set. Pos_{ΣA_i}^[≽](X) = OM̲_{ΣA_i}^[≽](X), Neg_{ΣA_i}^[≽](X) = U − OM̄_{ΣA_i}^[≽](X) and Bnd_{ΣA_i}^[≽](X) = OM̄_{ΣA_i}^[≽](X) − OM̲_{ΣA_i}^[≽](X).
From the above definition, it can be seen that the lower approximation in the optimistic multiple granulation rough set is defined through multiple dominance relations, whereas the rough lower approximation in Section 2.3 is defined through only one dominance relation. The operation in the lower approximation is “⋁”: for an object x, as long as the lower approximation condition is met by at least one dominance relation, x is placed in the lower approximation set. That is what “optimistic” means.
In the following, we will employ an example to illustrate the above concepts.
Example 3
(Continued from Example 1). In Table 1, we often face the phenomenon that consumers may prefer certain conditions of excellent communities, as follows:
Preference 1: Not only the location and the utility service are better, but also the type of layout is better.
Preference 2: Not only the location and the type of layout are better, but also the environment is better.
From Table 1, we know that U/d = {D_Y, D_N}, D_Y = {x_1, x_2, x_6}, D_N = {x_3, x_4, x_5}. Let X = D_Y = {x_1, x_2, x_6} be the set consisting of the excellent communities.
When we only consider one of the two preferences, which communities must be excellent and which may be excellent?
From Preferences 1 and 2 we obtain two dominance relations, represented by the Boolean matrices:
R_1 =
⎡ 1 0 0 0 0 0 ⎤
⎢ 0 1 0 0 0 0 ⎥
⎢ 0 0 1 0 0 0 ⎥
⎢ 0 0 0 1 0 0 ⎥
⎢ 0 0 0 0 1 0 ⎥
⎣ 0 1 0 1 1 1 ⎦ ,
R_2 =
⎡ 1 0 0 0 0 0 ⎤
⎢ 0 1 0 0 0 0 ⎥
⎢ 0 0 1 0 0 0 ⎥
⎢ 0 0 0 1 0 0 ⎥
⎢ 1 0 0 0 1 0 ⎥
⎣ 0 0 0 1 0 1 ⎦ .
Considering only Preference 1, we get
R̲_1^[≽](X) = {x_1, x_2, x_6},
R̄_1^[≽](X) = {x_1, x_2, x_4, x_5, x_6},
and considering only Preference 2, we get
R̲_2^[≽](X) = {x_2, x_6},
R̄_2^[≽](X) = {x_1, x_2, x_4, x_6}.
This means that x_1, x_2, x_6 must be excellent communities and x_1, x_2, x_4, x_5, x_6 may be excellent communities when we consider only Preference 1. When we consider only Preference 2, it is likewise easy to see that x_2, x_6 must be excellent communities and x_1, x_2, x_4, x_6 may be excellent communities.
Now we consider the question: When we consider at least one of the two preferences, which communities must be excellent? When we consider both preferences, which communities may be excellent? We can answer this question according to the definition of the optimistic multiple granulation rough set. We have
OM̲_{1+2}^[≽](X) = {x_1, x_2, x_6},
OM̄_{1+2}^[≽](X) = {x_1, x_2, x_4, x_6}.
So the communities x_1, x_2, x_6 must be excellent if we consider at least one of the two preferences, and the communities x_1, x_2, x_4, x_6 may be excellent if we consider both preferences. Moreover, we can observe that
OM̲_{1+2}^[≽](X) = R̲_1^[≽](X) ∪ R̲_2^[≽](X),
OM̄_{1+2}^[≽](X) = R̄_1^[≽](X) ∩ R̄_2^[≽](X).
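These results can be verified directly from the two relation matrices of Example 3. In the sketch below (our own encoding: 0-based indices x_1 ↦ 0, …, x_6 ↦ 5), column j of each matrix is read as the dominance class of x_j, i.e. the objects dominating x_j:

```python
# Rows of the two relation matrices from Example 3, as bit strings.
R1 = ["100000", "010000", "001000", "000100", "000010", "010111"]
R2 = ["100000", "010000", "001000", "000100", "100010", "000101"]

def classes_from(R):
    # column j collects the objects dominating x_j, i.e. [x_j]
    n = len(R)
    return {j: {i for i in range(n) if R[i][j] == "1"} for j in range(n)}

def om_lower(cls_list, X, U):
    # optimistic lower: [x] ⊆ X under at least one granulation
    return {x for x in U if any(c[x] <= X for c in cls_list)}

def om_upper(cls_list, X, U):
    # optimistic upper: [x] ∩ X ≠ ∅ under every granulation
    return {x for x in U if all(c[x] & X for c in cls_list)}

U = set(range(6))
X = {0, 1, 5}                         # D_Y = {x1, x2, x6}
grans = [classes_from(R1), classes_from(R2)]
print(sorted(om_lower(grans, X, U)))  # [0, 1, 5]    i.e. {x1, x2, x6}
print(sorted(om_upper(grans, X, U)))  # [0, 1, 3, 5] i.e. {x1, x2, x4, x6}
```

The output matches OM̲_{1+2}^[≽](X) and OM̄_{1+2}^[≽](X) above, and equals the union of the two single-granulation lower approximations and the intersection of the two upper ones, respectively.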
Proposition 6.
Let I^[] = (U, AT, V, f) be an IIFOIS, A_1, A_2, …, A_s ⊆ AT (2 ≤ s ≤ 2^|AT|), X, Y ⊆ U. Then the following results hold.
(OL1) OM̲_{ΣA_i}^[≽](X) ⊆ X   (Contraction)
(OU1) X ⊆ OM̄_{ΣA_i}^[≽](X)   (Extension)
(OL2) OM̲_{ΣA_i}^[≽](X) = ∼OM̄_{ΣA_i}^[≽](∼X)   (Duality)
(OU2) OM̄_{ΣA_i}^[≽](X) = ∼OM̲_{ΣA_i}^[≽](∼X)   (Duality)
(OL3) OM̲_{ΣA_i}^[≽](∅) = ∅   (Normality)
(OU3) OM̄_{ΣA_i}^[≽](∅) = ∅   (Normality)
(OL4) OM̲_{ΣA_i}^[≽](U) = U   (Co-normality)
(OU4) OM̄_{ΣA_i}^[≽](U) = U   (Co-normality)
(OL5) OM̲_{ΣA_i}^[≽](X ∩ Y) ⊆ OM̲_{ΣA_i}^[≽](X) ∩ OM̲_{ΣA_i}^[≽](Y)   (L-F-multiplication)
(OU5) OM̄_{ΣA_i}^[≽](X ∪ Y) ⊇ OM̄_{ΣA_i}^[≽](X) ∪ OM̄_{ΣA_i}^[≽](Y)   (U-F-addition)
(OL6) X ⊆ Y ⇒ OM̲_{ΣA_i}^[≽](X) ⊆ OM̲_{ΣA_i}^[≽](Y)   (Monotonicity)
(OU6) X ⊆ Y ⇒ OM̄_{ΣA_i}^[≽](X) ⊆ OM̄_{ΣA_i}^[≽](Y)   (Monotonicity)
(OL7) OM̲_{ΣA_i}^[≽](X ∪ Y) ⊇ OM̲_{ΣA_i}^[≽](X) ∪ OM̲_{ΣA_i}^[≽](Y)   (L-F-addition)
(OU7) OM̄_{ΣA_i}^[≽](X ∩ Y) ⊆ OM̄_{ΣA_i}^[≽](X) ∩ OM̄_{ΣA_i}^[≽](Y)   (U-F-multiplication)
The proof can be found in Proposition A1 of Appendix A.
Definition 2.
Let I^[] = (U, AT, V, f) be an IIFOIS, A_i ⊆ AT, i = 1, 2, …, s (2 ≤ s ≤ 2^|AT|), X ⊆ U. The optimistic multiple granulation rough measure of X by Σ_{i=1}^{s} A_i is
ρ_o^[≽](ΣA_i, X) = 1 − |OM̲_{ΣA_i}^[≽](X)| / |OM̄_{ΣA_i}^[≽](X)|.
Definition 3.
Let I^[] = (U, CT ∪ {d}, V, f) be an IIFDOIS, A_i ⊆ AT, i = 1, 2, …, s (2 ≤ s ≤ 2^|AT|). The quality of approximation of d by Σ_{i=1}^{s} A_i, also called the optimistic multiple granulation degree of dependency, is defined as
γ_o^[≽](ΣA_i, d) = (1/|U|) Σ_{j=1}^{k} |OM̲_{ΣA_i}^[≽](D_j)|,
where R_d = { (x_i, x_j) ∈ U × U | f(x_i, d) = f(x_j, d) } and U/d = {D_1, D_2, …, D_k}.
Example 4
(Continued from Example 3). We can obtain the optimistic multiple granulation rough measure of X by Preferences 1 and 2 as follows:
ρ_o^[≽](1+2, X) = 1 − |OM̲_{1+2}^[≽](X)| / |OM̄_{1+2}^[≽](X)| = 1 − 3/4 = 1/4.
We also have
OM̲_{1+2}^[≽](D_Y) = {x_1, x_2, x_6}, OM̲_{1+2}^[≽](D_N) = {x_3, x_5}.
Then the optimistic multiple granulation degree of dependency is
γ_o^[≽](1+2, d) = (|OM̲_{1+2}^[≽](D_Y)| + |OM̲_{1+2}^[≽](D_N)|) / |U| = 5/6.
This shows that, from the optimistic perspective, the degree of uncertainty under Preferences 1 and 2 is 1/4, and the degree of dependence of decision making on the attributes involved in Preferences 1 and 2 is 5/6.
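Both measures of Example 4 follow from the optimistic approximations. A small sketch (again our own encoding, with columns of R_1 and R_2 read as dominance classes and objects 0..5 standing for x_1..x_6):

```python
from fractions import Fraction

R1 = ["100000", "010000", "001000", "000100", "000010", "010111"]
R2 = ["100000", "010000", "001000", "000100", "100010", "000101"]
# cls[g][j] = dominance class of x_j under granulation g (column j)
cls = [{j: {i for i in range(6) if R[i][j] == "1"} for j in range(6)}
       for R in (R1, R2)]

def om_lower(X):
    # optimistic lower: [x] ⊆ X under at least one granulation
    return {x for x in range(6) if any(c[x] <= X for c in cls)}

def om_upper(X):
    # optimistic upper: [x] ∩ X ≠ ∅ under every granulation
    return {x for x in range(6) if all(c[x] & X for c in cls)}

DY, DN = {0, 1, 5}, {2, 3, 4}   # D_Y = {x1,x2,x6}, D_N = {x3,x4,x5}
rho_o = 1 - Fraction(len(om_lower(DY)), len(om_upper(DY)))
gamma_o = Fraction(len(om_lower(DY)) + len(om_lower(DN)), 6)
print(rho_o, gamma_o)  # 1/4 5/6
```

The printed values reproduce ρ_o^[≽](1+2, X) = 1/4 and γ_o^[≽](1+2, d) = 5/6.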

4. The Pessimistic Multiple Granulation Rough Set in IIFOIS

In this section, we will introduce another multiple granulation rough set in an IIFOIS, from a completely pessimistic perspective, which means that an object is accepted only if all conditions are met, together with some related properties. Similarly, two elementary measures and their properties are also provided.
Definition 4.
Let I^[] = (U, AT, V, f) be an IIFOIS, A_1, A_2, …, A_s ⊆ AT (2 ≤ s ≤ 2^|AT|), and let R_{A_1}^[≽], R_{A_2}^[≽], …, R_{A_s}^[≽] be the corresponding dominance relations. For any X ∈ P(U), the operators PM̲_{ΣA_i}^[≽] and PM̄_{ΣA_i}^[≽] : P(U) → P(U) are defined as follows:
PM̲_{ΣA_i}^[≽](X) = { x ∈ U | ⋀_{i=1}^{s} ([x]_{A_i}^[≽] ⊆ X) },
PM̄_{ΣA_i}^[≽](X) = { x ∈ U | ⋁_{i=1}^{s} ([x]_{A_i}^[≽] ∩ X ≠ ∅) },
where “⋁” means “or” and “⋀” means “and”. PM̲_{ΣA_i}^[≽](X) and PM̄_{ΣA_i}^[≽](X) are called the pessimistic multiple granulation lower and upper approximations of X.
Similarly, if PM̲_{ΣA_i}^[≽](X) = PM̄_{ΣA_i}^[≽](X), then X is a pessimistic definable set with respect to the multiple granulation dominance relations induced by A_1, A_2, …, A_s; otherwise X is a pessimistic rough set. Pos_{ΣA_i}^[≽](X) = PM̲_{ΣA_i}^[≽](X), Neg_{ΣA_i}^[≽](X) = U − PM̄_{ΣA_i}^[≽](X) and Bnd_{ΣA_i}^[≽](X) = PM̄_{ΣA_i}^[≽](X) − PM̲_{ΣA_i}^[≽](X).
From the above definition, it can be seen that the lower approximation in the pessimistic multiple granulation rough set is defined through multiple dominance relations, whereas the rough lower approximation in Section 2.3 is defined through only one dominance relation. The operation in the lower approximation is “⋀”: for an object x, the lower approximation condition must be met by all dominance relations before x can be placed in the lower approximation set. That is what “pessimistic” means.
We will illustrate the above concepts through the following example.
Example 5
(Continued from Example 3). Now we consider another question: When we consider both of two preferences, which one must be an excellent community? When we consider one of two preferences at least, which one may be an excellent community? We can solve the question according to the definition of the pessimistic multiple granulation rough set. Then we have
P M ̲ 1 + 2 [ ] ( X ) = { x 2 , x 6 } ,
P M ¯ 1 + 2 [ ] ( X ) = { x 1 , x 2 , x 4 , x 5 , x 6 } .
We can know that the community x 2 , x 6 must be excellent if we consider both of two preferences, and the community x 1 , x 2 , x 4 , x 5 , x 6 may be excellent if we consider both of two preferences. Moreover, we can obtain
P M ̲ 1 + 2 [ ] ( X ) = R ̲ 1 [ ] ( X ) ∩ R ̲ 2 [ ] ( X ) ,
P M ¯ 1 + 2 [ ] ( X ) = R ¯ 1 [ ] ( X ) ∪ R ¯ 2 [ ] ( X ) .
Proposition 7.
Let I [ ] = ( U , A T , V , f ) be an IIFOIS, A 1 , A 2 , , A s A T ( s 2 | A T | ), X , Y U , then the following results hold.
(PL1) P M ̲ i = 1 s A i [ ] ( X ) X   (Contraction)
(PU1) X P M ¯ i = 1 s A i [ ] ( X )   (Extension)
(PL2) P M ̲ i = 1 s A i [ ] ( X ) = P M ¯ i = 1 s A i [ ] ( X )   (Duality)
(PU2) P M ¯ i = 1 s A i [ ] ( X ) = P M ̲ i = 1 s A i [ ] ( X )   (Duality)
(PL3) P M ̲ i = 1 s A i [ ] ( ) =   (Normality)
(PU3) P M ¯ i = 1 s A i [ ] ( ) =   (Normality)
(PL4) P M ̲ i = 1 s A i [ ] ( U ) = U   (Co-normality)
(PU4) P M ¯ i = 1 s A i [ ] ( U ) = U   (Co-normality)
(PL5) P M ̲ i = 1 s A i [ ] ( X Y ) = P M ̲ i = 1 s A i [ ] ( X ) P M ̲ i = 1 s A i [ ] ( Y )   (L-F-multiplication)
(PU5) P M ¯ i = 1 s A i [ ] ( X Y ) = P M ¯ i = 1 s A i [ ] ( X ) P M ¯ i = 1 s A i [ ] ( Y )   (U-F-addition)
(PL6) X Y P M ̲ i = 1 s A i [ ] ( X ) P M ̲ i = 1 s A i [ ] ( Y )   (Monotonicity)
(PU6) X Y P M ¯ i = 1 s A i [ ] ( X ) P M ¯ i = 1 s A i [ ] ( Y )   (Monotonicity)
(PL7) P M ̲ i = 1 s A i [ ] ( X Y ) P M ̲ i = 1 s A i [ ] ( X ) P M ̲ i = 1 s A i [ ] ( Y )  (L-F-addition)
(PU7) P M ¯ i = 1 s A i [ ] ( X Y ) P M ¯ i = 1 s A i [ ] ( X ) P M ¯ i = 1 s A i [ ] ( Y )   (U-F-multiplication)
The proof can be found in Proposition A2 of Appendix A.
Definition 5.
Let I [ ] = ( U , A T , V , f ) be an IIFOIS, A i A T , i = 1 , 2 , , s ( s 2 | A T | ), X U , the pessimistic multiple granulation rough measure of X by i = 1 s A i is
ρ p [ ] ( i = 1 s A i , X ) = 1 | P M ̲ i = 1 s A i [ ] ( X ) | | P M ¯ i = 1 s A i [ ] ( X ) | .
Definition 6.
Let I [ ] = ( U , C T d , V , f ) be an IIFDOIS, A i A T , i = 1 , 2 , , s ( s 2 | A T | ). The quality of approximation of d by i = 1 s A i , also called the pessimistic degree of dependency, is defined as
γ p [ ] ( i = 1 s A i , d ) = 1 | U | j = 1 k ( | P M ̲ i = 1 s A i [ ] ( D j ) | ) ,
where R d [ ] = { ( x i , x j ) U × U | f ( x i , d ) = f ( x j , d ) } and U / d = { D 1 , D 2 , , D k } .
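Definitions 5 and 6 are plain cardinality ratios, so they can be checked mechanically. A minimal Python sketch, reusing the pessimistic approximation sets of Example 5 and the decision-class lower approximations { x 2 , x 6 } and { x 3 } of the running example ( | U | = 6 ):

```python
from fractions import Fraction  # exact rational arithmetic for the measures

def rough_measure(lower, upper):
    # Definition 5: rho = 1 - |lower| / |upper|
    return 1 - Fraction(len(lower), len(upper))

def degree_of_dependency(lowers_of_decision_classes, universe_size):
    # Definition 6: gamma = (sum over D_j of |lower approx of D_j|) / |U|
    return Fraction(sum(len(l) for l in lowers_of_decision_classes),
                    universe_size)

# sets taken from the running example (|U| = 6)
pm_lower = {"x2", "x6"}
pm_upper = {"x1", "x2", "x4", "x5", "x6"}
print(rough_measure(pm_lower, pm_upper))                    # 3/5
print(degree_of_dependency([{"x2", "x6"}, {"x3"}], 6))      # 1/2
```

The printed values 3/5 and 1/2 match the pessimistic rough measure and degree of dependency computed in Example 6 below.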
Example 6
(Continued from Example 3). We can obtain the pessimistic multiple rough measure of X by the Preference 1 and Preference 2, as follows
ρ p [ ] ( 1 + 2 , X ) = 1 | P M ̲ 1 + 2 [ ] ( X ) | | P M ¯ 1 + 2 [ ] ( X ) | = 3 5 .
And we have
P M ̲ 1 + 2 [ ] ( D Y ) = { x 2 , x 6 } , P M ̲ 1 + 2 [ ] ( D N ) = { x 3 } .
Then the pessimistic multiple granulation degree of dependency is
γ p [ ] ( 1 + 2 , d ) = | P M ̲ 1 + 2 [ ] ( D Y ) | + | P M ̲ 1 + 2 [ ] ( D N ) | | U | = 1 2 .
This shows that, from the pessimistic perspective, the degree of uncertainty by Preference 1 and Preference 2 is 3 5 , and the degree of dependence of decision making on the attributes of Preference 1 and Preference 2 is 1 2 .

5. The Generalized Multiple Granulation Rough Set in the IIFOIS

In the OMGRS theory and the PMGRS theory, the conditions for the approximate description of the target concept are either too loose or too strict, and neither considers the rule of the majority. In this section, we generalize the OMGRS and the PMGRS to the generalized multiple granulation rough set (GMGRS) in an IIFOIS. From the perspective of the lower approximation, the concept of the support feature function will be given. The lower and upper approximation operators and related properties of the GMGRS will be discussed. In addition, two important rough measures of the GMGRS are also provided.
Definition 7.
Let I [ ] = ( U , A T , V , f ) be an IIFOIS, A i A T , i = 1 , 2 , , s ( s 2 | A T | ). For any X P ( U ) , x U ,
S X A i ( x ) = 1 , [ x ] A i [ ] X , 0 , o t h e r w i s e .
S X A i ( x ) is called the support feature function of the object x U about the concept X in the dominance relation A i .
By Definition 7, we can know that S X A i ( x ) expresses whether x accurately supports the concept X, that is, whether x has a positive description of X with respect to the cover of A i on U.
Proposition 8.
Let I [ ] = ( U , A T , V , f ) be an IIFOIS, A i A T , i = 1 , 2 , , s ( s 2 | A T | ), X , Y P ( U ) , x U . The following properties about the support feature function S X A i ( x ) are established.
(1) 
S X A i ( x ) = 1 , [ x ] A i [ ] X = , 0 , [ x ] A i [ ] X .
(2) 
S A i ( x ) = 0 , S U A i ( x ) = 1 .
(3) 
S X Y A i ( x ) S X A i ( x ) S Y A i ( x ) .
(4) 
S X Y A i ( x ) = S X A i ( x ) S Y A i ( x ) .
(5) 
X Y S X A i ( x ) S Y A i ( x ) .
(6) 
X Y S X A i ( x ) S Y A i ( x ) .
The proof can be found in Proposition A3 of Appendix A.
“⋀” and “⋁” represent the operations of taking the minimum and the maximum, respectively.
Definition 8.
Let I [ ] = ( U , A T , V , f ) be an IIFOIS, A i A T , i = 1 , 2 , , s ( s 2 | A T | ), β ( 0.5 , 1 ] . For any X P ( U ) , S X A i ( x ) is the support feature function of x, then the lower and upper approximations of X by i = 1 s S X A i are as follows
G M ̲ i = 1 s A i [ ] ( X ) β = { x U | i = 1 s S X A i ( x ) s β } ,
G M ¯ i = 1 s A i [ ] ( X ) β = { x U | i = 1 s ( 1 S X A i ( x ) ) s > 1 β } .
X is called a definable set with respect to i = 1 s A i if and only if G M ̲ i = 1 s A i [ ] ( X ) β = G M ¯ i = 1 s A i [ ] ( X ) β ; otherwise X is a rough set. This model is the generalized multiple granulation rough set (GMGRS) model, and β is called the information level of i = 1 s A i .
Different from the optimistic and pessimistic multiple granulation rough sets, the lower approximation in the generalized multiple granulation rough set is defined by the proportion of dominance relations that meet the lower approximation condition. In fact, the GMGRS degenerates into the OMGRS and the PMGRS when β = 1 s and β = 1 , respectively.
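The proportion threshold of Definition 8 can be sketched in Python. The data below are hypothetical toy classes, not the paper's example; for the upper approximation the sketch uses the fact that 1 − S _ { ∁ X } ^ { A i } ( x ) = 1 exactly when [ x ] _ { A i } ∩ X ≠ ∅ .

```python
def support(g, x, X):
    # S_X^{A_i}(x): 1 iff the dominance class [x]_{A_i} is contained in X
    return 1 if g[x] <= X else 0

def gmgrs(universe, granulations, X, beta):
    """Generalized MGRS approximations at information level beta."""
    s = len(granulations)
    X = set(X)
    # lower: fraction of granulations supporting x must reach beta
    lower = {x for x in universe
             if sum(support(g, x, X) for g in granulations) / s >= beta}
    # upper: fraction of granulations whose class of x meets X exceeds 1 - beta
    upper = {x for x in universe
             if sum(bool(g[x] & X) for g in granulations) / s > 1 - beta}
    return lower, upper

# hypothetical toy data with three granulations
U = {1, 2, 3, 4}
g1 = {1: {1}, 2: {1, 2}, 3: {3}, 4: {3, 4}}
g2 = {1: {1, 2}, 2: {2}, 3: {3, 4}, 4: {4}}
g3 = {1: {1, 4}, 2: {2}, 3: {3}, 4: {4}}
lower06, upper06 = gmgrs(U, [g1, g2, g3], {1, 2}, 0.6)
```

At β = 0.6 an object needs support from at least two of the three granulations; raising β to 1 recovers the stricter pessimistic lower approximation, illustrating the monotonicity of Proposition 10.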
Proposition 9.
Let I [ ] = ( U , A T , V , f ) be an IIFOIS, A i A T , i = 1 , 2 , , s ( s 2 | A T | ), β ( 0.5 , 1 ] . For any X , Y P ( U ) , the following results hold.
(1L) G M ̲ i = 1 s A i [ ] ( X ) β = G M ¯ i = 1 s A i [ ] ( X ) β   (Duality)
(1U) G M ¯ i = 1 s A i [ ] ( X ) β = G M ̲ i = 1 s A i [ ] ( X ) β   (Duality)
(2L) G M ̲ i = 1 s A i [ ] ( X ) β X   (Contraction)
(2U) X G M ¯ i = 1 s A i [ ] ( X ) β   (Extension)
(3L) G M ̲ i = 1 s A i [ ] ( ) β =   (Normality)
(3U) G M ¯ i = 1 s A i [ ] ( ) β =   (Normality)
(4L) G M ̲ i = 1 s A i [ ] ( U ) β = U   (Co-normality)
(4U) G M ¯ i = 1 s A i [ ] ( U ) β = U   (Co-normality)
(5L) X Y G M ̲ i = 1 s A i [ ] ( X ) β G M ̲ i = 1 s A i [ ] ( Y ) β   (Monotonicity)
(5U) X Y G M ¯ i = 1 s A i [ ] ( X ) β G M ¯ i = 1 s A i [ ] ( Y ) β   (Monotonicity)
(6L) G M ̲ i = 1 s A i [ ] ( X Y ) β G M ̲ i = 1 s A i [ ] ( X ) β G M ̲ i = 1 s A i [ ] ( Y ) β   (L-F-multiplication)
(6U) G M ¯ i = 1 s A i [ ] ( X Y ) β G M ¯ i = 1 s A i [ ] ( X ) β G M ¯ i = 1 s A i [ ] ( Y ) β   (U-F-addition)
(7L) G M ̲ i = 1 s A i [ ] ( X Y ) β G M ̲ i = 1 s A i [ ] ( X ) β G M ̲ i = 1 s A i [ ] ( Y ) β   (L-F-addition)
(7U) G M ¯ i = 1 s A i [ ] ( X Y ) β G M ¯ i = 1 s A i [ ] ( X ) β G M ¯ i = 1 s A i [ ] ( Y ) β   (U-F-multiplication)
The proof can be found in Proposition A4 of Appendix A.
Proposition 10.
Let I [ ] = ( U , A T , V , f ) be an IIFOIS, A i A T , i = 1 , 2 , , s ( s 2 | A T | ). For any α ( 0.5 , 1 ] , β ( 0.5 , 1 ] with α ≤ β , and t ≤ s , X P ( U ) , the following properties hold.
(1) G M ̲ i = 1 s A i [ ] ( X ) β G M ̲ i = 1 s A i [ ] ( X ) α .
(2) G M ¯ i = 1 s A i [ ] ( X ) α G M ¯ i = 1 s A i [ ] ( X ) β .
(3) G M ̲ i = 1 t A i [ ] ( X ) β G M ̲ i = 1 s A i [ ] ( X ) β .
(4) G M ¯ i = 1 s A i [ ] ( X ) β G M ¯ i = 1 t A i [ ] ( X ) β .
The proof can be found in Proposition A5 of Appendix A.
Example 7
(Continued from Example 3). However, some consumers only prefer one of the four community attributes.
Preference 1: Only the location is better.
Preference 2: Only the utility service is better.
Preference 3: Only the type of layout is better.
Preference 4: Only the environment is better.
By Preferences 1, 2, 3 and 4, we can get the following four dominance relations.
R 1 = [ 1 0 1 1 0 0 ; 1 1 1 1 1 0 ; 0 0 1 0 0 0 ; 0 0 1 1 0 0 ; 1 0 1 1 1 0 ; 1 1 1 1 1 1 ] , R 2 = [ 1 1 1 1 1 1 ; 0 1 1 1 0 0 ; 0 0 1 1 0 0 ; 0 0 0 1 0 0 ; 0 1 1 1 1 0 ; 0 1 1 1 1 1 ] ,
R 3 = [ 1 1 0 0 0 0 ; 0 1 0 0 0 0 ; 1 1 1 1 1 0 ; 1 1 0 1 1 0 ; 1 1 0 0 1 0 ; 1 1 1 1 1 1 ] , R 4 = [ 1 1 1 1 0 1 ; 0 1 1 1 0 1 ; 0 0 1 1 0 1 ; 0 0 0 1 0 0 ; 1 1 1 1 1 1 ; 0 0 0 1 0 1 ] ,
where each R i is a 6 × 6 matrix and semicolons separate its rows.
When we consider at least three of the four preferences, which one must be an excellent community? When we consider at least one of the four preferences, which one may be an excellent community? Unlike the OMGRS and the PMGRS, we can deal with this situation through the GMGRS. The support feature functions of the objects are shown in Table 2.
Let β = 0.75 , then we have
G M ̲ i = 1 4 A i [ ] ( X ) β = { x 6 } ,
G M ¯ i = 1 4 A i [ ] ( X ) β = { x 1 , x 2 , x 3 , x 4 , x 5 , x 6 } .
We can know that the community x 6 must be excellent when we consider at least three of the four preferences, and the communities x 1 , x 2 , x 3 , x 4 , x 5 , x 6 may be excellent when we consider at least one of the four preferences.
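This computation can be replayed numerically. Reading each printed matrix row by row, the published approximations are reproduced when the dominance class of x j is taken as { x i | R [ i ] [ j ] = 1 } ; this class convention is inferred from the reported results rather than restated here, so treat it as an assumption of the sketch.

```python
# the four relations of Example 7, one string per matrix row
R1 = ["101100", "111110", "001000", "001100", "101110", "111111"]
R2 = ["111111", "011100", "001100", "000100", "011110", "011111"]
R3 = ["110000", "010000", "111110", "110110", "110010", "111111"]
R4 = ["111101", "011101", "001101", "000100", "111111", "000101"]

def classes(R):
    n = len(R)
    # dominance class of x_j: objects x_i (1-based) with R[i][j] == '1'
    return {j + 1: {i + 1 for i in range(n) if R[i][j] == "1"}
            for j in range(n)}

relations = [classes(R) for R in (R1, R2, R3, R4)]
U = set(range(1, 7))
X = {1, 2, 6}          # the excellent communities D_Y
beta = 0.75
s = len(relations)

lower = {x for x in U
         if sum(g[x] <= X for g in relations) / s >= beta}
upper = {x for x in U
         if sum(bool(g[x] & X) for g in relations) / s > 1 - beta}
print(sorted(lower), sorted(upper))  # [6] [1, 2, 3, 4, 5, 6]
```

Only x 6 is supported by three of the four relations (exactly the fraction β = 0.75 ), and every object meets X under at least two relations, matching the lower and upper approximations reported above.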
Definition 9.
Let I [ ] = ( U , A T , V , f ) be an IIFOIS, A i A T , i = 1 , 2 , , s ( s 2 | A T | ), β ( 0.5 , 1 ] . For any X U , the generalized multiple granulation rough measure of X by i = 1 s A i is
ρ g [ ] ( i = 1 s A i , X ) = 1 | G M ̲ i = 1 s A i [ ] ( X ) β | | G M ¯ i = 1 s A i [ ] ( X ) β | .
Definition 10.
Let I [ ] = ( U , C T d , V , f ) be an IIFDOIS, A i A T , i = 1 , 2 , , s ( s 2 | A T | ), β ( 0.5 , 1 ] . The quality of approximation of d by i = 1 s A i , also called the generalized degree of dependency, is defined as
γ g [ ] ( i = 1 s A i , d ) = 1 | U | j = 1 k | G M ̲ i = 1 s A i [ ] ( D j ) β | ,
where R d [ ] = { ( x i , x j ) U × U | f ( x i , d ) = f ( x j , d ) } and U / d = { D 1 , D 2 , , D k } .
Example 8
(Continued from Example 7). The generalized multiple granulation rough measure of X by i = 1 4 A i is
ρ g [ ] ( i = 1 4 A i , X ) = 1 | G M ̲ i = 1 4 A i [ ] ( X ) β | | G M ¯ i = 1 4 A i [ ] ( X ) β | = 5 6 ,
and
G M ̲ i = 1 4 A i [ ] ( D Y ) β = { x 6 } , G M ¯ i = 1 4 A i [ ] ( D N ) β = ,
so
γ g [ ] ( i = 1 4 A i , d ) = | G M ̲ i = 1 4 A i [ ] ( D Y ) β ) | + | G M ̲ i = 1 4 A i [ ] ( D N ) β ) | | U | = 1 6 .
This shows that the degree of uncertainty by the four preferences is 5 6 when considering an intermediate situation between the optimistic and the pessimistic cases, and the degree of dependence of decision making on the attributes of the four preferences is 1 6 in this intermediate situation.

6. Differences and Relationships among the Dominance Relation Rough Set, the OMGRS and the PMGRS in an IIFOIS

The previous sections have given the definitions and properties of the multiple granulation rough sets in an IIFOIS. In this section, we investigate the differences and relationships among the dominance relation rough set, the OMGRS, the PMGRS and the GMGRS in an IIFOIS.
Proposition 11.
Let I [ ] = ( U , A T , V , f ) be an IIFOIS, A i A T , i = 1 , 2 , , s ( s 2 | A T | ). X P ( U ) . Then the following properties hold.
(1) O M ̲ i = 1 s A i [ ] ( X ) ⊆ R ̲ i = 1 s A i [ ] ( X ) .
(2) O M ¯ i = 1 s A i [ ] ( X ) ⊇ R ¯ i = 1 s A i [ ] ( X ) .
(3) P M ̲ i = 1 s A i [ ] ( X ) ⊆ R ̲ i = 1 s A i [ ] ( X ) .
(4) P M ¯ i = 1 s A i [ ] ( X ) ⊇ R ¯ i = 1 s A i [ ] ( X ) .
The proof can be found in Proposition A6 of Appendix A.
Proposition 12.
Let I [ ] = ( U , A T , V , f ) be an IIFOIS, A i A T , i = 1 , 2 , , s ( s 2 | A T | ), X U . Then the following properties hold.
(1) O M ̲ i = 1 s A i [ ] ( X ) = ⋃ i = 1 s R ̲ A i [ ] ( X ) .
(2) O M ¯ i = 1 s A i [ ] ( X ) = ⋂ i = 1 s R ¯ A i [ ] ( X ) .
(3) P M ̲ i = 1 s A i [ ] ( X ) = ⋂ i = 1 s R ̲ A i [ ] ( X ) .
(4) P M ¯ i = 1 s A i [ ] ( X ) = ⋃ i = 1 s R ¯ A i [ ] ( X ) .
The proof can be found in Proposition A7 of Appendix A.
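Proposition 12 can also be checked numerically on small hypothetical dominance classes; the same check covers the dual upper-approximation identities.

```python
# hypothetical dominance classes for two granulations
U = {1, 2, 3, 4}
g1 = {1: {1}, 2: {1, 2}, 3: {3}, 4: {3, 4}}
g2 = {1: {1, 2}, 2: {2}, 3: {3, 4}, 4: {4}}
X = {1, 3}

def lower(g):
    # single granulation lower approximation under one dominance relation
    return {x for x in U if g[x] <= X}

def upper(g):
    # single granulation upper approximation under one dominance relation
    return {x for x in U if g[x] & X}

# optimistic: "or" over granulations; pessimistic: "and" over granulations
om_lower = {x for x in U if any(g[x] <= X for g in (g1, g2))}
pm_lower = {x for x in U if all(g[x] <= X for g in (g1, g2))}
om_upper = {x for x in U if all(g[x] & X for g in (g1, g2))}
pm_upper = {x for x in U if any(g[x] & X for g in (g1, g2))}
```

On this data the optimistic lower approximation equals the union of the single lowers and the pessimistic lower equals their intersection, as Proposition 12 states.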
Proposition 13.
Let I [ ] = ( U , A T , V , f ) be an IIFOIS, A i A T , i = 1 , 2 , , s ( s 2 | A T | ), X U , Y U . Then we have
(1) O M ̲ i = 1 s A i [ ] ( X Y ) = i = 1 s ( R ̲ A i [ ] ( X ) R ̲ A i [ ] ( Y ) ) .
(2) O M ¯ i = 1 s A i [ ] ( X Y ) = i = 1 s ( R ¯ A i [ ] ( X ) R ¯ A i [ ] ( Y ) ) .
(3) P M ̲ i = 1 s A i [ ] ( X Y ) = i = 1 s ( R ̲ A i [ ] ( X ) R ̲ A i [ ] ( Y ) ) .
(4) P M ¯ i = 1 s A i [ ] ( X Y ) = i = 1 s ( R ¯ A i [ ] ( X ) R ¯ A i [ ] ( Y ) ) .
The proof can be found in Proposition A8 of Appendix A.
Proposition 14.
Let I [ ] = ( U , A T , V , f ) be an IIFOIS, A i A T , i = 1 , 2 , , s ( s 2 | A T | ), X P ( U ) . The lower and upper approximations of the OMGRS and the PMGRS can be expressed by the support feature function as
(1) O M ̲ i = 1 s A i [ ] ( X ) = { x U | i = 1 s S X A i ( x ) s > 0 } , O M ¯ i = 1 s A i [ ] ( X ) = { x U | i = 1 s ( 1 S X A i ( x ) ) s 1 } .
(2) P M ̲ i = 1 s A i [ ] ( X ) = { x U | i = 1 s S X A i ( x ) s 1 } , P M ¯ i = 1 s A i [ ] ( X ) = { x U | i = 1 s ( 1 S X A i ( x ) ) s > 0 } .
The proof can be found in Proposition A9 of Appendix A.
Proposition 15.
Let I [ ] = ( U , A T , V , f ) be an IIFOIS, A i A T , i = 1 , 2 , , s ( s 2 | A T | ), β ( 0.5 , 1 ] , X U . Then we have
(1) P M ̲ i = 1 s A i [ ] ( X ) ⊆ G M ̲ i = 1 s A i [ ] ( X ) β ⊆ O M ̲ i = 1 s A i [ ] ( X ) ⊆ R ̲ i = 1 s A i [ ] ( X ) .
(2) P M ¯ i = 1 s A i [ ] ( X ) ⊇ G M ¯ i = 1 s A i [ ] ( X ) β ⊇ O M ¯ i = 1 s A i [ ] ( X ) ⊇ R ¯ i = 1 s A i [ ] ( X ) .
(3) P M ̲ i = 1 s A i [ ] ( X ) ⊆ R ̲ A i [ ] ( X ) ⊆ O M ̲ i = 1 s A i [ ] ( X ) ⊆ R ̲ i = 1 s A i [ ] ( X ) .
(4) P M ¯ i = 1 s A i [ ] ( X ) ⊇ R ¯ A i [ ] ( X ) ⊇ O M ¯ i = 1 s A i [ ] ( X ) ⊇ R ¯ i = 1 s A i [ ] ( X ) .
The proof can be found in Proposition A10 of Appendix A.
Example 9
(Continued from Example 3). By computing, we can obtain the approximations of the target set X by P r e f e r e n c e 1 ∪ P r e f e r e n c e 2 as follows.
R ̲ 1 2 [ ] ( X ) = { x 1 , x 2 , x 6 } ,
R ¯ 1 2 [ ] ( X ) = { x 1 , x 2 , x 4 , x 6 } ,
then
P M ̲ 1 + 2 [ ] ( X ) ⊆ O M ̲ 1 + 2 [ ] ( X ) ⊆ R ̲ 1 2 [ ] ( X ) ⊆ X ⊆ R ¯ 1 2 [ ] ( X ) ⊆ O M ¯ 1 + 2 [ ] ( X ) ⊆ P M ¯ 1 + 2 [ ] ( X ) .
Proposition 16.
Let I [ ] = ( U , A T , V , f ) be an IIFOIS, A i A T , i = 1 , 2 , , s ( s 2 | A T | ), X U . Then
(1) ρ [ ] ( R A i [ ] , X ) ≥ ρ o [ ] ( i = 1 s A i , X ) ≥ ρ [ ] ( R i = 1 s A i [ ] , X ) .
(2) ρ p [ ] ( i = 1 s A i , X ) ≥ ρ [ ] ( R A i [ ] , X ) ≥ ρ [ ] ( R i = 1 s A i [ ] , X ) .
(3) ρ p [ ] ( i = 1 s A i , X ) ≥ ρ [ ] ( R A i [ ] , X ) ≥ ρ o [ ] ( i = 1 s A i , X ) ≥ ρ [ ] ( R i = 1 s A i [ ] , X ) .
The proof can be found in Proposition A11 of Appendix A.
Example 10
(Continued from Example 3). Computing the rough measures of the target set X = D Y = { x 1 , x 2 , x 6 } according to the results in Example 3, it follows that
ρ [ ] ( 1 , X ) = 1 | R ̲ 1 [ ] ( X ) | | R ¯ 1 [ ] ( X ) | = 2 5 , ρ [ ] ( 2 , X ) = 1 | R ̲ 2 [ ] ( X ) | | R ¯ 2 [ ] ( X ) | = 1 2 ,
ρ [ ] ( 1 2 , X ) = 1 | R ̲ 1 2 [ ] ( X ) | | R ¯ 1 2 [ ] ( X ) | = 1 4 , ρ o [ ] ( 1 + 2 , X ) = 1 | O M ̲ 1 + 2 [ ] ( X ) | | O M ¯ 1 + 2 [ ] ( X ) | = 1 4 , ρ p [ ] ( 1 + 2 , X ) = 1 | P M ̲ 1 + 2 [ ] ( X ) | | P M ¯ 1 + 2 [ ] ( X ) | = 3 5 ,
then
ρ p [ ] ( 1 + 2 , X ) ≥ ρ [ ] ( 1 , X ) ≥ ρ o [ ] ( 1 + 2 , X ) ≥ ρ [ ] ( 1 2 , X ) .
Proposition 17.
Let I [ ] = ( U , C T d , V , f ) be an IIFDOIS, A i A T , i = 1 , 2 , , s ( s 2 | A T | ). Then
(1) γ [ ] ( R A i [ ] , d ) ≤ γ o [ ] ( i = 1 s A i , d ) ≤ γ [ ] ( R i = 1 s A i [ ] , d ) .
(2) γ p [ ] ( i = 1 s A i , d ) ≤ γ [ ] ( R A i [ ] , d ) ≤ γ [ ] ( R i = 1 s A i [ ] , d ) .
(3) γ p [ ] ( i = 1 s A i , d ) ≤ γ [ ] ( R A i [ ] , d ) ≤ γ o [ ] ( i = 1 s A i , d ) ≤ γ [ ] ( R i = 1 s A i [ ] , d ) .
The proof can be found in Proposition A12 of Appendix A.
Example 11
(Continued from Example 3). We compute the degree of dependence by the single granulations and the multiple granulations. From Table 2, U / d = { D Y , D N } , D Y = { x 1 , x 2 , x 6 } , D N = { x 3 , x 4 , x 5 } .
R ̲ 1 [ ] ( D Y ) = { x 1 , x 2 , x 6 } , R ̲ 2 [ ] ( D Y ) = { x 2 , x 6 } , R ̲ 1 2 [ ] ( D Y ) = { x 1 , x 2 , x 6 } ,
R ̲ 1 [ ] ( D N ) = { x 3 } , R ̲ 2 [ ] ( D N ) = { x 3 , x 5 } , R ̲ 1 2 [ ] ( D N ) = { x 3 , x 5 } ,
then we have
γ [ ] ( 1 , d ) = 1 | U | ( | R ̲ 1 [ ] ( D Y ) | + | R ̲ 1 [ ] ( D N ) | ) = 2 3 ,
γ [ ] ( 2 , d ) = 1 | U | ( | R ̲ 2 [ ] ( D Y ) | + | R ̲ 2 [ ] ( D N ) | ) = 2 3 ,
γ [ ] ( 1 2 , d ) = 1 | U | ( | R ̲ 1 2 [ ] ( D Y ) | + | R ̲ 1 2 [ ] ( D N ) | ) = 5 6 .
Moreover,
O M ̲ 1 + 2 [ ] ( D Y ) = { x 1 , x 2 , x 6 } , O M ̲ 1 + 2 [ ] ( D N ) = { x 3 , x 5 } ,
then we have
γ o [ ] ( 1 + 2 , d ) = 1 | U | ( | O M ̲ 1 + 2 [ ] ( D Y ) | + | O M ̲ 1 + 2 [ ] ( D N ) | ) = 5 6 .
Moreover,
P M ̲ 1 + 2 [ ] ( D Y ) = { x 2 , x 6 } , P M ̲ 1 + 2 [ ] ( D N ) = { x 3 } ,
then we have
γ p [ ] ( 1 + 2 , d ) = 1 | U | ( | P M ̲ 1 + 2 [ ] ( D Y ) | + | P M ̲ 1 + 2 [ ] ( D N ) | ) = 1 2 .
Then
γ p [ ] ( 1 + 2 , d ) ≤ γ [ ] ( 1 , d ) ≤ γ o [ ] ( 1 + 2 , d ) ≤ γ [ ] ( 1 2 , d ) .

7. Conclusions

The rough set and the intuitionistic fuzzy set are two important tools for describing the uncertainty and vagueness of knowledge, and they have been widely applied in the fields of granular computing and attribute selection; interval-valuing the intuitionistic fuzzy set is very helpful and meaningful. In this paper, we established a rough set model and three types of multiple granulation rough set models in an IIFOIS. We introduced two types of MGRS models in the IIFOIS, by which the granular structures of the lower and upper approximation operators of the target concept were addressed. Moreover, we investigated a number of important properties of the two types of MGRS models, and several measures, such as the rough measure and the quality of approximation, were also discussed. Furthermore, a more general MGRS model was provided, and its related properties and rough measures were discussed. In addition, the relationships and differences among the single granulation rough set, the three types of MGRS and their measures based on an IIFOIS were analyzed. In order to help us apply the MGRS model theory to actual problems, a real example was provided.
Feature selection is a hot research area at present. This paper has established a rough set theoretical model based on the IIFOIS. In further research, on the basis of what we have done, we can carry out some related work on feature selection. On the one hand, we can explore attribute reduction, including the lower and upper approximation reductions, based on the rough set model we have established in an IIFOIS. On the other hand, we can study dynamically updating approximations using the results of this work. In addition, we can also use the results of this paper to do some work on multiple source information fusion.

Author Contributions

Conceptualization, X.Z.; methodology, X.Z. and Z.L.; investigation, X.Z. and Z.L.; writing Z.L.; supervision, X.Z.; funding acquisition, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (Nos. 61976245 and 61172002).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

Xiaoyan Zhang has received research grants from the National Natural Science Foundation of China (Nos. 61976245 and 61172002), and this work was supported by these grants. The authors declare that they have no conflict of interest.

Appendix A

In this section, we will give corresponding proofs to some propositions in this article.
Proposition A1.
Let I [ ] = ( U , A T , V , f ) be an IIFOIS, A 1 , A 2 , , A s A T ( s 2 | A T | ), X U , then the following results hold.
(OL1) O M ̲ i = 1 s A i [ ] ( X ) X   (Contraction)
(OU1) X O M ¯ i = 1 s A i [ ] ( X )   (Extension)
(OL2) O M ̲ i = 1 s A i [ ] ( X ) = O M ¯ i = 1 s A i [ ] ( X )   (Duality)
(OU2) O M ¯ i = 1 s A i [ ] ( X ) = O M ̲ i = 1 s A i [ ] ( X )   (Duality)
(OL3) O M ̲ i = 1 s A i [ ] ( ) =   (Normality)
(OU3) O M ¯ i = 1 s A i [ ] ( ) =   (Normality)
(OL4) O M ̲ i = 1 s A i [ ] ( U ) = U   (Co-normality)
(OU4) O M ¯ i = 1 s A i [ ] ( U ) = U   (Co-normality)
(OL5) O M ̲ i = 1 s A i [ ] ( X Y ) O M ̲ i = 1 s A i [ ] ( X ) O M ̲ i = 1 s A i [ ] ( Y )   (L-F-multiplication)
(OU5) O M ¯ i = 1 s A i [ ] ( X Y ) O M ¯ i = 1 s A i [ ] ( X ) O M ¯ i = 1 s A i [ ] ( Y )   (U-F-addition)
(OL6) X Y O M ̲ i = 1 s A i [ ] ( X ) O M ̲ i = 1 s A i [ ] ( Y )   (Monotonicity)
(OU6) X Y O M ¯ i = 1 s A i [ ] ( X ) O M ¯ i = 1 s A i [ ] ( Y )   (Monotonicity)
(OL7) O M ̲ i = 1 s A i [ ] ( X Y ) O M ̲ i = 1 s A i [ ] ( X ) O M ̲ i = 1 s A i [ ] ( Y )   (L-F-addition)
(OU7) O M ¯ i = 1 s A i = l [ ] ( X Y ) O M ¯ i = 1 s A i [ ] ( X ) O M ¯ i = 1 s A i [ ] ( Y )   (U-F-multiplication)
Proof. 
For convenience, we will prove the results only for s = 2 , i.e., when there are two dominance relations ( A , B A T ) in an IIFOIS. All terms hold obviously when A = B . The following is the proof when A ≠ B .
(OL1) For any x O M ̲ A + B [ ] ( X ) , according to Definition 1 we can know that [ x ] A [ ] X or [ x ] B [ ] X . Besides, x [ x ] A [ ] and x [ x ] B [ ] . So we can have that x X . Therefore, O M ̲ A + B [ ] ( X ) X .
(OU1) For any x X , x [ x ] A [ ] and x [ x ] B [ ] , then [ x ] A [ ] X and [ x ] B [ ] X . It is also to say x O M ¯ A + B [ ] ( X ) . Therefore, X O M ¯ A + B [ ] ( X ) .
(OL2) For any x O M ̲ A + B [ ] ( X ) , we have that
x O M ̲ A + B [ ] ( X ) [ x ] A [ ] X o r [ x ] B [ ] X [ x ] A [ ] X = o r [ x ] B [ ] X = x O M ¯ A + B [ ] ( X ) x O M ¯ A + B [ ] ( X )
Therefore, O M ̲ A + B [ ] ( X ) = O M ¯ A + B [ ] ( X ) .
(OU2) For any x O M ¯ A + B [ ] ( X ) , we have that
x O M ¯ A + B [ ] ( X ) [ x ] A [ ] X o r [ x ] B [ ] X [ x ] A [ ] X a n d [ x ] B [ ] X x O M ̲ A + B [ ] ( X ) x O M ¯ A + B [ ] ( X )
Therefore, O M ¯ A + B [ ] ( X ) = O M ̲ A + B [ ] ( X ) .
Or for any x O M ¯ A + B [ ] ( X ) , by O L 2 we can know that O M ̲ A + B [ ] ( X ) = O M ¯ A + B [ ] ( X ) O M ̲ A + B [ ] ( X ) = O M ¯ A + B [ ] ( X ) O M ¯ A + B [ ] ( X ) = O M ̲ A + B [ ] ( X ) .
(OL3) By O L 1 , we can know that O M ̲ A + B [ ] ( ) . Meantime, O M ̲ A + B [ ] ( X ) . Therefore, O M ̲ A + B [ ] ( ) =
(OU3) If O M ¯ A + B [ ] ( ) , then there must be a x O M ¯ A + B [ ] ( ) . By Definition 1, [ x ] A [ ] and [ x ] B [ ] . It is obvious that this is a contradiction. Therefore, O M ¯ A + B [ ] ( ) = .
(OL4) O M ̲ A + B [ ] ( U ) = O M ̲ A + B [ ] ( ) = O M ¯ A + B [ ] ( ) = = U .
(OU4) O M ¯ A + B [ ] ( U ) = O M ¯ A + B [ ] ( ) = O M ̲ A + B [ ] ( ) = = U .
(OL5) For any x O M ̲ A + B [ ] ( X Y ) , we have that [ x ] A [ ] ( X Y ) or [ x ] B [ ] ( X Y ) by Definition 1. Furthermore, we can get that [ x ] A [ ] X and [ x ] A [ ] Y hold at the same time or [ x ] B [ ] X and [ x ] B [ ] Y hold at the same time. So not only [ x ] A [ ] X or [ x ] B [ ] X hold, but also [ x ] A [ ] Y or [ x ] B [ ] Y hold. It is also to say that x O M ̲ A + B [ ] ( X ) and x O M ̲ A + B [ ] ( Y ) . So x O M ̲ A + B [ ] ( X ) O M ̲ A + B [ ] ( Y ) . Therefore, O M ̲ A + B [ ] ( X Y ) O M ̲ A + B [ ] ( X ) O M ̲ A + B [ ] ( Y )
(OU5) For any x O M ¯ A + B [ ] ( X ) O M ¯ A + B [ ] ( Y ) , we have that x O M ¯ A + B [ ] ( X ) or x O M ¯ A + B [ ] ( Y ) , then [ x ] A [ ] X and [ x ] B [ ] X hold at same time, or [ x ] A [ ] Y and [ x ] B [ ] Y hold at same time. It is also to say that not only [ x ] A [ ] ( X Y ) , but also [ x ] B [ ] ( X Y ) . So x O M ¯ A + B [ ] ( X Y ) . Therefore, O M ¯ A + B [ ] ( X ) O M ¯ A + B [ ] ( Y ) O M ¯ A + B [ ] ( X Y ) .
(OL6) Since X Y , then X Y = X O M ̲ A + B [ ] ( X Y ) = O M ̲ A + B [ ] ( X ) . By O L 5 , O M ̲ A + B [ ] ( X Y ) O M ̲ A + B [ ] ( X ) O M ̲ A + B [ ] ( Y ) O M ̲ A + B [ ] ( X ) O M ̲ A + B [ ] ( X ) O M ̲ A + B [ ] ( Y ) . Therefore, O M ̲ A + B [ ] ( X ) O M ̲ A + B [ ] ( Y ) .
(OU6) Since X Y , then X Y = Y O M ¯ A + B [ ] ( X Y ) = O M ¯ A + B [ ] ( Y ) . By O U 5 , O M ¯ A + B [ ] ( X Y ) O M ¯ A + B [ ] ( X ) O M ¯ A + B [ ] ( Y ) O M ¯ A + B [ ] ( X ) O M ¯ A + B [ ] ( X ) O M ¯ A + B [ ] ( Y ) . Therefore, O M ¯ A + B [ ] ( X ) O M ¯ A + B [ ] ( Y ) .
(OL7) Since X X Y and Y X Y , by O L 6 , O M ̲ A + B [ ] ( X ) O M ̲ A + B [ ] ( X Y ) and O M ̲ A + B [ ] ( Y ) O M ̲ A + B [ ] ( X Y ) . Therefore, O M ̲ A + B [ ] ( X ) O M ̲ A + B [ ] ( Y ) O M ̲ A + B [ ] ( X Y ) .
(OU7) Since X Y X and X Y Y , by O U 6 , O M ¯ A + B [ ] ( X Y ) O M ¯ A + B [ ] ( X ) and O M ¯ A + B [ ] ( X Y ) O M ¯ A + B [ ] ( Y ) . Therefore, O M ¯ A + B [ ] ( X Y ) O M ¯ A + B [ ] ( X ) O M ¯ A + B [ ] ( Y ) . □
Proposition A2.
Let I [ ] = ( U , A T , V , f ) be an IIFOIS, A 1 , A 2 , , A s A T ( s 2 | A T | ), X , Y U , then the following results hold.
(PL1) P M ̲ i = 1 s A i [ ] ( X ) X   (Contraction)
(PU1) X P M ¯ i = 1 s A i [ ] ( X )   (Extension)
(PL2) P M ̲ i = 1 s A i [ ] ( X ) = P M ¯ i = 1 s A i [ ] ( X )   (Duality)
(PU2) P M ¯ i = 1 s A i [ ] ( X ) = P M ̲ i = 1 s A i [ ] ( X )   (Duality)
(PL3) P M ̲ i = 1 s A i [ ] ( ) =   (Normality)
(PU3) P M ¯ i = 1 s A i [ ] ( ) =   (Normality)
(PL4) P M ̲ i = 1 s A i [ ] ( U ) = U   (Co-normality)
(PU4) P M ¯ i = 1 s A i [ ] ( U ) = U   (Co-normality)
(PL5) P M ̲ i = 1 s A i [ ] ( X Y ) = P M ̲ i = 1 s A i [ ] ( X ) P M ̲ i = 1 s A i [ ] ( Y )   (L-F-multiplication)
(PU5) P M ¯ i = 1 s A i [ ] ( X Y ) = P M ¯ i = 1 s A i [ ] ( X ) P M ¯ i = 1 s A i [ ] ( Y )   (U-F-addition)
(PL6) X Y P M ̲ i = 1 s A i [ ] ( X ) P M ̲ i = 1 s A i [ ] ( Y )   (Monotonicity)
(PU6) X Y P M ¯ i = 1 s A i [ ] ( X ) P M ¯ i = 1 s A i [ ] ( Y )   (Monotonicity)
(PL7) P M ̲ i = 1 s A i [ ] ( X Y ) P M ̲ i = 1 s A i [ ] ( X ) P M ̲ i = 1 s A i [ ] ( Y )   (L-F-addition)
(PU7) P M ¯ i = 1 s A i [ ] ( X Y ) P M ¯ i = 1 s A i [ ] ( X ) P M ¯ i = 1 s A i [ ] ( Y )   (U-F-multiplication)
Proof. 
For convenience, we will prove the results only for s = 2 , i.e., when there are two dominance relations ( A , B A T ) in an IIFOIS. All terms hold obviously when A = B . The following is the proof when A ≠ B .
(PL1) For any x P M ̲ A + B [ ] ( X ) , according to Definition 4 we can know that [ x ] A [ ] X and [ x ] B [ ] X . Besides x [ x ] A [ ] and x [ x ] B [ ] . So we can have that x X . Therefore, P M ̲ A + B [ ] ( X ) X .
(PU1) For any x X , x [ x ] A [ ] and x [ x ] B [ ] , then [ x ] A [ ] X and [ x ] B [ ] X . Besides, x P M ¯ A + B [ ] ( X ) . Therefore, X P M ¯ A + B [ ] ( X ) .
(PL2) For any x P M ̲ A + B [ ] ( X ) , we have that
x P M ̲ A + B [ ] ( X ) [ x ] A [ ] X a n d [ x ] B [ ] X [ x ] A [ ] X = a n d [ x ] B [ ] X = x P M ¯ A + B [ ] ( X ) x P M ¯ A + B [ ] ( X )
Therefore, P M ̲ A + B [ ] ( X ) = P M ¯ A + B [ ] ( X ) .
(PU2) For any x P M ¯ A + B [ ] ( X ) , by P L 2 we can know that P M ̲ A + B [ ] ( X ) = P M ¯ A + B [ ] ( X ) P M ̲ A + B [ ] ( X ) = P M ¯ A + B [ ] ( X ) P M ¯ A + B [ ] ( X ) = P M ̲ A + B [ ] ( X ) .
(PL3) By P L 1 , we can know that P M ̲ A + B [ ] ( ) . Meantime, P M ̲ A + B [ ] ( X ) . Therefore, P M ̲ A + B [ ] ( ) =
(PU3) If P M ¯ A + B [ ] ( ) , then there must be a x P M ¯ A + B [ ] ( ) . By Definition 4, [ x ] A [ ] or [ x ] B [ ] . It is obvious that this is a contradiction. Therefore, P M ¯ A + B [ ] ( ) = .
(PL4) P M ̲ A + B [ ] ( U ) = P M ̲ A + B [ ] ( ) = P M ¯ A + B [ ] ( ) = = U .
(PU4) P M ¯ A + B [ ] ( U ) = P M ¯ A + B [ ] ( ) = P M ̲ A + B [ ] ( ) = = U .
(PL5) For any x P M ̲ A + B [ ] ( X Y ) , we have that
x P M ̲ A + B [ ] ( X Y ) [ x ] A [ ] X Y a n d [ x ] B [ ] X Y [ x ] A [ ] X , [ x ] A [ ] Y , [ x ] B [ ] X a n d [ x ] B [ ] Y [ x ] A [ ] X , [ x ] B [ ] X , [ x ] A [ ] Y a n d [ x ] B [ ] Y x P M ̲ A + B [ ] ( X ) a n d x P M ̲ A + B [ ] ( Y ) x P M ̲ A + B [ ] ( X ) P M ̲ A + B [ ] ( Y )
Therefore, P M ̲ A + B [ ] ( X Y ) = P M ̲ A + B [ ] ( X ) P M ̲ A + B [ ] ( Y ) .
(PU5) For any x P M ¯ A + B [ ] ( X Y ) , we have that
x P M ¯ A + B [ ] ( X Y ) [ x ] A [ ] ( X Y ) o r [ x ] B [ ] ( X Y ) [ x ] A [ ] X o r [ x ] A [ ] Y , o r [ x ] B [ ] X o r [ x ] B [ ] Y [ x ] A [ ] X o r [ x ] B [ ] X , o r [ x ] A [ ] Y o r [ x ] B [ ] Y x P M ¯ A + B [ ] ( X ) o r x P M ¯ A + B [ ] ( Y ) x P M ¯ A + B [ ] ( X ) P M ¯ A + B [ ] ( Y )
Therefore, P M ¯ A + B [ ] ( X Y ) = P M ¯ A + B [ ] ( X ) P M ¯ A + B [ ] ( Y ) .
(PL6) Since X Y , then X Y = X P M ̲ A + B [ ] ( X Y ) = P M ̲ A + B [ ] ( X ) . By P L 5 , P M ̲ A + B [ ] ( X Y ) = P M ̲ A + B [ ] ( X ) P M ̲ A + B [ ] ( Y ) P M ̲ A + B [ ] ( X ) = P M ̲ A + B [ ] ( X ) P M ̲ A + B [ ] ( Y ) . Therefore, P M ̲ A + B [ ] ( X ) P M ̲ A + B [ ] ( Y ) .
(PU6) Since X Y , then X Y = Y P M ¯ A + B [ ] ( X Y ) = P M ¯ A + B [ ] ( Y ) . By P U 5 , P M ¯ A + B [ ] ( X Y ) = P M ¯ A + B [ ] ( X ) P M ¯ A + B [ ] ( Y ) P M ¯ A + B [ ] ( X ) = P M ¯ A + B [ ] ( X ) P M ¯ A + B [ ] ( Y ) . Therefore, P M ¯ A + B [ ] ( X ) P M ¯ A + B [ ] ( Y ) .
(PL7) Since X X Y and Y X Y , by P L 6 , P M ̲ A + B [ ] ( X ) P M ̲ A + B [ ] ( X Y ) and P M ̲ A + B [ ] ( Y ) P M ̲ A + B [ ] ( X Y ) . Therefore, P M ̲ A + B [ ] ( X ) P M ̲ A + B [ ] ( Y ) P M ̲ A + B [ ] ( X Y ) .
(PU7) Since X Y X and X Y Y , by P U 6 , P M ¯ A + B [ ] ( X Y ) P M ¯ A + B [ ] ( X ) and P M ¯ A + B [ ] ( X Y ) P M ¯ A + B [ ] ( Y ) . Therefore, P M ¯ A + B [ ] ( X Y ) P M ¯ A + B [ ] ( X ) P M ¯ A + B [ ] ( Y ) . □
Proposition A3.
Let I [ ] = ( U , A T , V , f ) be an IIFOIS, A i A T , i = 1 , 2 , , s ( s 2 | A T | ), X , Y P ( U ) , x U . The following properties about the support feature function S X A i ( x ) are established.
(1) S X A i ( x ) = 1 , [ x ] A i [ ] X = , 0 , [ x ] A i [ ] X .
(2) S A i ( x ) = 0 , S U A i ( x ) = 1 .
(3) S X Y A i ( x ) S X A i ( x ) S Y A i ( x ) .
(4) S X Y A i ( x ) = S X A i ( x ) S Y A i ( x ) .
(5) X Y S X A i ( x ) S Y A i ( x ) .
(6) X Y S X A i ( x ) S Y A i ( x ) .
” and “” represent the operation of taking small and taking big, respectively.
Proof. 
(1) By Definition 7, [ x ] A i [ ] X [ x ] A i [ ] X = and [ x ] A i [ ] X [ x ] A i [ ] X
(2) By Definition 7, for any x U , [ x ] A i [ ] , that is to say S A i ( x ) = 0 . For any x U , [ x ] A i [ ] U , that is to say S U A i ( x ) = 1 .
(3) For any Z P ( U ) , Z X or Z Y Z ( X Y ) . So
S X A i ( x ) S Y A i ( x ) = 1 S X A i ( x ) = 1 o r S Y A i ( x ) = 1 [ x ] A i [ ] X o r [ x ] A i [ ] X [ x ] A i [ ] ( X Y ) S X Y A i ( x ) = 1
If X Y = U , it is obvious that S X Y A i ( x ) = S U A i ( x ) = 1 , otherwise we have ( X Y ) = ( X ) ( Y ) . Then
S X A i ( x ) = 0 [ x ] A i [ ] ( X Y ) [ x ] A i [ ] [ ( X ) ( Y ) ] [ x ] A i [ ] ( X ) a n d [ x ] A i [ ] ( Y ) S X A i ( x ) = 0 a n d S Y A i ( x ) = 0 S X A i S Y A i ( x ) = 0
As a result, for any x U , S X Y A i ( x ) S X A i ( x ) S Y A i ( x ) .
(4) For any Z P ( U ) , Z X and Z Y Z ( X Y ) . So
S X Y A i ( x ) = 0 [ x ] A i [ ] ( X Y ) [ x ] A i [ ] [ ( X ) ( Y ) ] [ x ] A i [ ] ( X ) a n d [ x ] A i [ ] ( Y ) S X A i ( x ) = 0 a n d S Y A i ( x ) = 0 S X A i S Y A i ( x ) = 0
and
S X Y A i ( x ) = 1 [ x ] A i [ ] X Y [ x ] A i [ ] X a n d [ x ] A i [ ] Y S X A i ( x ) = 1 a n d S Y A i ( x ) = 1 S X A i S Y A i ( x ) = 1
As a result, for any x U , S X Y A i ( x ) = S X A i ( x ) S Y A i ( x ) .
(5) If S Y A i ( x ) = 0 [ x ] A i [ ] Y [ x ] A i [ ] X S X A i ( x ) = 0 . If S X A i ( x ) = 1 [ x ] A i [ ] X Y S Y A i ( x ) = 1 . In that way, X Y S X A i ( x ) S Y A i ( x ) .
(6) Similarly, if S X A i ( x ) = 0 [ x ] A i [ ] X [ x ] A i [ ] Y S Y A i ( x ) = 0 . If S Y A i ( x ) = 1 [ x ] A i [ ] Y = [ x ] A i [ ] X = S X A i ( x ) = 1 . Thus, X Y S X A i ( x ) S Y A i ( x ) . □
Proposition A4.
Let I [ ] = ( U , A T , V , f ) be an IIFOIS, A i A T , i = 1 , 2 , , s ( s 2 | A T | ), β ( 0.5 , 1 ] . For any X , Y P ( U ) , the following results hold.
(1L) G M ̲ i = 1 s A i [ ] ( X ) β = G M ¯ i = 1 s A i [ ] ( X ) β   (Duality)
(1U) G M ¯ i = 1 s A i [ ] ( X ) β = G M ̲ i = 1 s A i [ ] ( X ) β   (Duality)
(2L) G M ̲ i = 1 s A i [ ] ( X ) β X   (Contraction)
(2U) X G M ¯ i = 1 s A i [ ] ( X ) β   (Extension)
(3L) G M ̲ i = 1 s A i [ ] ( ) β =   (Normality)
(3U) G M ¯ i = 1 s A i [ ] ( ) β =   (Normality)
(4L) G M ̲ i = 1 s A i [ ] ( U ) β = U   (Co-normality)
(4U) G M ¯ i = 1 s A i [ ] ( U ) β = U   (Co-normality)
(5L) X Y G M ̲ i = 1 s A i [ ] ( X ) β G M ̲ i = 1 s A i [ ] ( Y ) β   (Monotonicity)
(5U) X Y G M ¯ i = 1 s A i [ ] ( X ) β G M ¯ i = 1 s A i [ ] ( Y ) β   (Monotonicity)
(6L) G M ̲ i = 1 s A i [ ] ( X Y ) β G M ̲ i = 1 s A i [ ] ( X ) β G M ̲ i = 1 s A i [ ] ( Y ) β   (L-F-multiplication)
(6U) G M ¯ i = 1 s A i [ ] ( X Y ) β G M ¯ i = 1 s A i [ ] ( X ) β G M ¯ i = 1 s A i [ ] ( Y ) β   (L-F-addition)
(7L) G M ̲ i = 1 s A i [ ] ( X Y ) β G M ̲ i = 1 s A i [ ] ( X ) β G M ̲ i = 1 s A i [ ] ( Y ) β   (L-F-addition)
(7U) G M ¯ i = 1 s A i [ ] ( X Y ) β G M ¯ i = 1 s A i [ ] ( X ) β G M ¯ i = 1 s A i [ ] ( Y ) β   (L-F-multiplication)
Proof. 
(1L) By Definition 8, we have that
$x\in\sim\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(\sim X)_{\beta}\iff\frac{1}{s}\sum_{i=1}^{s}\bigl(1-S_{X}^{A_i}(x)\bigr)\leq 1-\beta\iff\frac{1}{s}\sum_{i=1}^{s}S_{X}^{A_i}(x)\geq\beta\iff x\in\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}.$
(1U) By Definition 8, we have that
$x\in\sim\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(\sim X)_{\beta}\iff\frac{1}{s}\sum_{i=1}^{s}S_{\sim X}^{A_i}(x)<\beta\iff\frac{1}{s}\sum_{i=1}^{s}\bigl(1-S_{\sim X}^{A_i}(x)\bigr)>1-\beta\iff x\in\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}.$
(2L) For any $x\in\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}$, we know that $\frac{1}{s}\sum_{i=1}^{s}S_{X}^{A_i}(x)\geq\beta>0$. So there must exist some $i\leq s$ such that $[x]_{A_i}^{[\succeq]}\subseteq X$; since $x\in[x]_{A_i}^{[\succeq]}$, it follows that $x\in X$. Therefore, $\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}\subseteq X$.
(2U) By Proposition 9 (1L) and (2L), we have that $\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}=\sim\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(\sim X)_{\beta}\supseteq\sim(\sim X)=X$. Therefore, $X\subseteq\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}$.
(3L), (4L) By Proposition 8, we know that $S_{\emptyset}^{A_i}(x)=0$ and $S_{U}^{A_i}(x)=1$; then
$\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(\emptyset)_{\beta}=\Bigl\{x\in U\ \Big|\ \frac{1}{s}\sum_{i=1}^{s}S_{\emptyset}^{A_i}(x)=0\geq\beta\Bigr\}=\emptyset,$
$\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(U)_{\beta}=\Bigl\{x\in U\ \Big|\ \frac{1}{s}\sum_{i=1}^{s}S_{U}^{A_i}(x)=1\geq\beta\Bigr\}=U.$
(3U), (4U) By Proposition 9 (1L) and (1U), we have
$\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(\emptyset)_{\beta}=\sim\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(\sim\emptyset)_{\beta}=\sim\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(U)_{\beta}=\sim U=\emptyset,$
$\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(U)_{\beta}=\sim\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(\sim U)_{\beta}=\sim\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(\emptyset)_{\beta}=\sim\emptyset=U.$
(5L) For any $x\in\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}$, we have $\frac{1}{s}\sum_{i=1}^{s}S_{X}^{A_i}(x)\geq\beta$. By Proposition 8, $X\subseteq Y\implies S_{X}^{A_i}(x)\leq S_{Y}^{A_i}(x)$; then $\frac{1}{s}\sum_{i=1}^{s}S_{Y}^{A_i}(x)\geq\frac{1}{s}\sum_{i=1}^{s}S_{X}^{A_i}(x)\geq\beta$, that is, $x\in\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(Y)_{\beta}$. Consequently, $X\subseteq Y\implies\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}\subseteq\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(Y)_{\beta}$.
(5U) For any $x\in\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}$, we have $\frac{1}{s}\sum_{i=1}^{s}\bigl(1-S_{\sim X}^{A_i}(x)\bigr)>1-\beta$. By Proposition 8, $X\subseteq Y\implies\sim Y\subseteq\sim X\implies S_{\sim Y}^{A_i}(x)\leq S_{\sim X}^{A_i}(x)$; then $\frac{1}{s}\sum_{i=1}^{s}\bigl(1-S_{\sim Y}^{A_i}(x)\bigr)\geq\frac{1}{s}\sum_{i=1}^{s}\bigl(1-S_{\sim X}^{A_i}(x)\bigr)>1-\beta$, that is, $x\in\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(Y)_{\beta}$. Consequently, $X\subseteq Y\implies\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}\subseteq\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(Y)_{\beta}$.
(6L) For any $x\in\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X\cap Y)_{\beta}$, we have that
$x\in\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X\cap Y)_{\beta}\iff\frac{1}{s}\sum_{i=1}^{s}S_{X\cap Y}^{A_i}(x)=\frac{1}{s}\sum_{i=1}^{s}\bigl(S_{X}^{A_i}(x)\wedge S_{Y}^{A_i}(x)\bigr)\geq\beta\implies\frac{1}{s}\sum_{i=1}^{s}S_{X}^{A_i}(x)\geq\beta\ \text{and}\ \frac{1}{s}\sum_{i=1}^{s}S_{Y}^{A_i}(x)\geq\beta\iff x\in\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}\ \text{and}\ x\in\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(Y)_{\beta}\iff x\in\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}\cap\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(Y)_{\beta}.$
Hence, $\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X\cap Y)_{\beta}\subseteq\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}\cap\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(Y)_{\beta}$.
(6U) $\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X\cup Y)_{\beta}=\sim\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}\bigl(\sim(X\cup Y)\bigr)_{\beta}=\sim\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}\bigl((\sim X)\cap(\sim Y)\bigr)_{\beta}\supseteq\sim\bigl(\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(\sim X)_{\beta}\cap\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(\sim Y)_{\beta}\bigr)=\sim\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(\sim X)_{\beta}\cup\sim\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(\sim Y)_{\beta}=\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}\cup\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(Y)_{\beta}.$
Hence, $\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X\cup Y)_{\beta}\supseteq\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}\cup\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(Y)_{\beta}$.
(7L) $X\subseteq X\cup Y$ and $Y\subseteq X\cup Y$, so by (5L), $\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}\subseteq\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X\cup Y)_{\beta}$ and $\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(Y)_{\beta}\subseteq\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X\cup Y)_{\beta}$, whence $\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}\cup\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(Y)_{\beta}\subseteq\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X\cup Y)_{\beta}$.
So, $\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X\cup Y)_{\beta}\supseteq\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}\cup\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(Y)_{\beta}$.
(7U) $X\cap Y\subseteq X$ and $X\cap Y\subseteq Y$, so by (5U), $\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X\cap Y)_{\beta}\subseteq\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}$ and $\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X\cap Y)_{\beta}\subseteq\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(Y)_{\beta}$, whence $\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X\cap Y)_{\beta}\subseteq\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}\cap\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(Y)_{\beta}$.
So, $\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X\cap Y)_{\beta}\subseteq\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}\cap\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(Y)_{\beta}$. □
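The properties of Proposition A4 can also be exercised numerically. The following is a minimal sketch, assuming the generalized MGRS operators are computed as the set of objects whose average support reaches $\beta$ (lower) and the dual condition via the complement (upper); the universe, the scores, and $\beta=0.6$ are all invented for the example.

```python
from itertools import combinations

U = set(range(6))
# Illustrative dominance classes from assumed per-attribute scores.
SCORES = {"A1": [3, 1, 2, 2, 0, 3], "A2": [1, 2, 0, 2, 1, 3], "A3": [2, 0, 1, 3, 1, 2]}
GRANS = list(SCORES)

def cls(a, x):
    return {y for y in U if SCORES[a][y] >= SCORES[a][x]}

def S(a, X, x):
    return 1 if cls(a, x) <= X else 0

def gm_lower(X, beta):
    # accept x when the fraction of granulations supporting X reaches beta
    return {x for x in U if sum(S(a, X, x) for a in GRANS) / len(GRANS) >= beta}

def gm_upper(X, beta):
    comp = U - X
    return {x for x in U
            if sum(1 - S(a, comp, x) for a in GRANS) / len(GRANS) > 1 - beta}

def subsets(v):
    v = sorted(v)
    return [set(c) for r in range(len(v) + 1) for c in combinations(v, r)]

beta = 0.6
assert gm_lower(set(), beta) == set() == gm_upper(set(), beta)     # (3L), (3U)
assert gm_lower(U, beta) == U == gm_upper(U, beta)                 # (4L), (4U)
for X in subsets(U):
    assert gm_lower(X, beta) == U - gm_upper(U - X, beta)          # (1L) duality
    assert gm_lower(X, beta) <= X <= gm_upper(X, beta)             # (2L), (2U)
    for Y in subsets(U):
        if X <= Y:
            assert gm_lower(X, beta) <= gm_lower(Y, beta)          # (5L)
        assert gm_lower(X & Y, beta) <= gm_lower(X, beta) & gm_lower(Y, beta)  # (6L)
        assert gm_lower(X | Y, beta) >= gm_lower(X, beta) | gm_lower(Y, beta)  # (7L)
print("Proposition A4 checks pass on the toy system")
```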
Proposition A5.
Let $I^{[\succeq]}=(U,AT,V,f)$ be an IIFOIS, $A_i\subseteq AT$ ($i=1,2,\ldots,s$; $2\leq s\leq 2^{|AT|}$). For any $\alpha,\beta\in(0.5,1]$ with $\alpha\leq\beta$, any $t\leq s$, and any $X\in\mathcal{P}(U)$, the following properties hold.
(1) $\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}\subseteq\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\alpha}$.
(2) $\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\alpha}\subseteq\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}$.
(3) $\underline{GM}_{\sum_{i=1}^{t}A_{i}}^{[\succeq]}(X)_{\beta}\subseteq\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}$.
(4) $\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}\subseteq\overline{GM}_{\sum_{i=1}^{t}A_{i}}^{[\succeq]}(X)_{\beta}$.
Proof. 
It can be obtained easily by Definitions 7 and 8. □
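Properties (1) and (2), monotonicity in the threshold, can be checked directly; the behaviour in the number of granulations $t\leq s$ is not exercised here. A sketch under the same kind of assumed toy system as before (invented universe and scores):

```python
from itertools import combinations

U = set(range(6))
SCORES = {"A1": [3, 1, 2, 2, 0, 3], "A2": [1, 2, 0, 2, 1, 3], "A3": [2, 0, 1, 3, 1, 2]}
GRANS = list(SCORES)

def cls(a, x):
    return {y for y in U if SCORES[a][y] >= SCORES[a][x]}

def S(a, X, x):
    return 1 if cls(a, x) <= X else 0

def gm_lower(X, beta):
    return {x for x in U if sum(S(a, X, x) for a in GRANS) / len(GRANS) >= beta}

def gm_upper(X, beta):
    comp = U - X
    return {x for x in U
            if sum(1 - S(a, comp, x) for a in GRANS) / len(GRANS) > 1 - beta}

def subsets(v):
    v = sorted(v)
    return [set(c) for r in range(len(v) + 1) for c in combinations(v, r)]

alpha, beta = 0.55, 0.9          # alpha <= beta, both in (0.5, 1]
for X in subsets(U):
    assert gm_lower(X, beta) <= gm_lower(X, alpha)   # (1): tighter threshold, smaller lower
    assert gm_upper(X, alpha) <= gm_upper(X, beta)   # (2): tighter threshold, larger upper
print("Proposition A5 (1)-(2) checks pass")
```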
Proposition A6.
Let $I^{[\succeq]}=(U,AT,V,f)$ be an IIFOIS, $A_i\subseteq AT$ ($i=1,2,\ldots,s$; $2\leq s\leq 2^{|AT|}$), and $X\in\mathcal{P}(U)$. Then the following properties hold.
(1) $\underline{OM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\subseteq\underline{R}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)$.
(2) $\overline{OM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\supseteq\overline{R}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)$.
(3) $\underline{PM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\subseteq\underline{R}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)$.
(4) $\overline{PM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\supseteq\overline{R}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)$.
Proof. 
Similarly, for convenience we will prove these properties only for two dominance relations $A,B\subseteq AT$ in an IIFOIS.
(1) For any $x\in\underline{OM}_{A+B}^{[\succeq]}(X)$, by Definition 1 we have that $[x]_{A}^{[\succeq]}\subseteq X$ or $[x]_{B}^{[\succeq]}\subseteq X$. Besides, $A\subseteq A\cup B$ and $B\subseteq A\cup B$ imply $[x]_{A\cup B}^{[\succeq]}\subseteq[x]_{A}^{[\succeq]}$ and $[x]_{A\cup B}^{[\succeq]}\subseteq[x]_{B}^{[\succeq]}$ by Proposition 1. So $[x]_{A\cup B}^{[\succeq]}\subseteq X$, that is, $x\in\underline{R}_{A+B}^{[\succeq]}(X)$. Therefore, $\underline{OM}_{A+B}^{[\succeq]}(X)\subseteq\underline{R}_{A+B}^{[\succeq]}(X)$.
(2) By Proposition 11 (1), we know that $\underline{OM}_{A+B}^{[\succeq]}(\sim X)\subseteq\underline{R}_{A+B}^{[\succeq]}(\sim X)$. Then $\sim\underline{R}_{A+B}^{[\succeq]}(\sim X)\subseteq\sim\underline{OM}_{A+B}^{[\succeq]}(\sim X)$, that is, $\overline{R}_{A+B}^{[\succeq]}(X)\subseteq\overline{OM}_{A+B}^{[\succeq]}(X)$ by Proposition 3 (2L) and Proposition 6 ($OL_{2}$). Therefore, $\overline{OM}_{A+B}^{[\succeq]}(X)\supseteq\overline{R}_{A+B}^{[\succeq]}(X)$.
(3) For any $x\in\underline{PM}_{A+B}^{[\succeq]}(X)$, by Definition 4 we have that $[x]_{A}^{[\succeq]}\subseteq X$ and $[x]_{B}^{[\succeq]}\subseteq X$. As in (1), $[x]_{A\cup B}^{[\succeq]}\subseteq[x]_{A}^{[\succeq]}$ by Proposition 1, so $[x]_{A\cup B}^{[\succeq]}\subseteq X$, that is, $x\in\underline{R}_{A+B}^{[\succeq]}(X)$. Therefore, $\underline{PM}_{A+B}^{[\succeq]}(X)\subseteq\underline{R}_{A+B}^{[\succeq]}(X)$.
(4) By Proposition 11 (3), we know that $\underline{PM}_{A+B}^{[\succeq]}(\sim X)\subseteq\underline{R}_{A+B}^{[\succeq]}(\sim X)$. Then $\sim\underline{R}_{A+B}^{[\succeq]}(\sim X)\subseteq\sim\underline{PM}_{A+B}^{[\succeq]}(\sim X)$, that is, $\overline{R}_{A+B}^{[\succeq]}(X)\subseteq\overline{PM}_{A+B}^{[\succeq]}(X)$ by Propositions 3 (2L) and 7 ($PL_{2}$). Therefore, $\overline{PM}_{A+B}^{[\succeq]}(X)\supseteq\overline{R}_{A+B}^{[\succeq]}(X)$. □
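These four inclusions can be confirmed on a toy system in which the combined relation is modelled by intersecting the dominance classes, as in Proposition 1. The universe and scores below are invented for the illustration:

```python
from itertools import combinations

U = set(range(6))
SCORES = {"A1": [3, 1, 2, 2, 0, 3], "A2": [1, 2, 0, 2, 1, 3], "A3": [2, 0, 1, 3, 1, 2]}
GRANS = list(SCORES)

def cls(a, x):
    return {y for y in U if SCORES[a][y] >= SCORES[a][x]}

def combined(x):
    """Class of x under the combined (intersected) dominance relation."""
    out = set(U)
    for a in GRANS:
        out &= cls(a, x)
    return out

def r_lower(X):  # single-granulation rough set w.r.t. the combined relation
    return {x for x in U if combined(x) <= X}

def r_upper(X):
    return {x for x in U if combined(x) & X}

def om_lower(X):  # optimistic: one supporting granulation suffices
    return {x for x in U if any(cls(a, x) <= X for a in GRANS)}

def om_upper(X):
    return U - om_lower(U - X)

def pm_lower(X):  # pessimistic: every granulation must support
    return {x for x in U if all(cls(a, x) <= X for a in GRANS)}

def pm_upper(X):
    return U - pm_lower(U - X)

def subsets(v):
    v = sorted(v)
    return [set(c) for r in range(len(v) + 1) for c in combinations(v, r)]

for X in subsets(U):
    assert om_lower(X) <= r_lower(X)   # (1)
    assert r_upper(X) <= om_upper(X)   # (2)
    assert pm_lower(X) <= r_lower(X)   # (3)
    assert r_upper(X) <= pm_upper(X)   # (4)
print("Proposition A6 checks pass")
```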
Proposition A7.
Let $I^{[\succeq]}=(U,AT,V,f)$ be an IIFOIS, $A_i\subseteq AT$ ($i=1,2,\ldots,s$; $2\leq s\leq 2^{|AT|}$), and $X\subseteq U$. Then the following properties hold.
(1) $\underline{OM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)=\bigcup_{i=1}^{s}\underline{R}_{A_i}^{[\succeq]}(X)$.
(2) $\overline{OM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)=\bigcap_{i=1}^{s}\overline{R}_{A_i}^{[\succeq]}(X)$.
(3) $\underline{PM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)=\bigcap_{i=1}^{s}\underline{R}_{A_i}^{[\succeq]}(X)$.
(4) $\overline{PM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)=\bigcup_{i=1}^{s}\overline{R}_{A_i}^{[\succeq]}(X)$.
Proof. 
Without loss of generality, we will prove these properties only about two dominance relations A , B A T in an IIFOIS for convenience.
(1) For any $x\in U$, we have that
$x\in\underline{OM}_{A+B}^{[\succeq]}(X)\iff[x]_{A}^{[\succeq]}\subseteq X\ \text{or}\ [x]_{B}^{[\succeq]}\subseteq X\iff x\in\underline{R}_{A}^{[\succeq]}(X)\ \text{or}\ x\in\underline{R}_{B}^{[\succeq]}(X)\iff x\in\underline{R}_{A}^{[\succeq]}(X)\cup\underline{R}_{B}^{[\succeq]}(X).$
Consequently, $\underline{OM}_{A+B}^{[\succeq]}(X)=\underline{R}_{A}^{[\succeq]}(X)\cup\underline{R}_{B}^{[\succeq]}(X)$.
(2) For any $x\in U$, we have that
$x\in\overline{OM}_{A+B}^{[\succeq]}(X)\iff[x]_{A}^{[\succeq]}\cap X\neq\emptyset\ \text{and}\ [x]_{B}^{[\succeq]}\cap X\neq\emptyset\iff x\in\overline{R}_{A}^{[\succeq]}(X)\ \text{and}\ x\in\overline{R}_{B}^{[\succeq]}(X)\iff x\in\overline{R}_{A}^{[\succeq]}(X)\cap\overline{R}_{B}^{[\succeq]}(X).$
Consequently, $\overline{OM}_{A+B}^{[\succeq]}(X)=\overline{R}_{A}^{[\succeq]}(X)\cap\overline{R}_{B}^{[\succeq]}(X)$.
(3) For any $x\in U$, we have that
$x\in\underline{PM}_{A+B}^{[\succeq]}(X)\iff[x]_{A}^{[\succeq]}\subseteq X\ \text{and}\ [x]_{B}^{[\succeq]}\subseteq X\iff x\in\underline{R}_{A}^{[\succeq]}(X)\ \text{and}\ x\in\underline{R}_{B}^{[\succeq]}(X)\iff x\in\underline{R}_{A}^{[\succeq]}(X)\cap\underline{R}_{B}^{[\succeq]}(X).$
So, $\underline{PM}_{A+B}^{[\succeq]}(X)=\underline{R}_{A}^{[\succeq]}(X)\cap\underline{R}_{B}^{[\succeq]}(X)$.
(4) For any $x\in U$, we have that
$x\in\overline{PM}_{A+B}^{[\succeq]}(X)\iff[x]_{A}^{[\succeq]}\cap X\neq\emptyset\ \text{or}\ [x]_{B}^{[\succeq]}\cap X\neq\emptyset\iff x\in\overline{R}_{A}^{[\succeq]}(X)\ \text{or}\ x\in\overline{R}_{B}^{[\succeq]}(X)\iff x\in\overline{R}_{A}^{[\succeq]}(X)\cup\overline{R}_{B}^{[\succeq]}(X).$
So, $\overline{PM}_{A+B}^{[\succeq]}(X)=\overline{R}_{A}^{[\succeq]}(X)\cup\overline{R}_{B}^{[\succeq]}(X)$. □
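The four identities above, reducing the MGRS approximations to unions and intersections of single-granulation approximations, can be verified exhaustively on a toy system (invented universe and scores, as in the earlier sketches):

```python
from itertools import combinations

U = set(range(6))
SCORES = {"A1": [3, 1, 2, 2, 0, 3], "A2": [1, 2, 0, 2, 1, 3], "A3": [2, 0, 1, 3, 1, 2]}
GRANS = list(SCORES)

def cls(a, x):
    return {y for y in U if SCORES[a][y] >= SCORES[a][x]}

def om_lower(X):
    return {x for x in U if any(cls(a, x) <= X for a in GRANS)}

def om_upper(X):
    return U - om_lower(U - X)

def pm_lower(X):
    return {x for x in U if all(cls(a, x) <= X for a in GRANS)}

def pm_upper(X):
    return U - pm_lower(U - X)

def subsets(v):
    v = sorted(v)
    return [set(c) for r in range(len(v) + 1) for c in combinations(v, r)]

for X in subsets(U):
    lows = [{x for x in U if cls(a, x) <= X} for a in GRANS]
    upps = [{x for x in U if cls(a, x) & X} for a in GRANS]
    assert om_lower(X) == set.union(*lows)           # (1)
    assert om_upper(X) == set.intersection(*upps)    # (2)
    assert pm_lower(X) == set.intersection(*lows)    # (3)
    assert pm_upper(X) == set.union(*upps)           # (4)
print("Proposition A7 checks pass")
```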
Proposition A8.
Let $I^{[\succeq]}=(U,AT,V,f)$ be an IIFOIS, $A_i\subseteq AT$ ($i=1,2,\ldots,s$; $2\leq s\leq 2^{|AT|}$), and $X,Y\subseteq U$. Then we have
(1) $\underline{OM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X\cap Y)=\bigcup_{i=1}^{s}\bigl(\underline{R}_{A_i}^{[\succeq]}(X)\cap\underline{R}_{A_i}^{[\succeq]}(Y)\bigr)$.
(2) $\overline{OM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X\cup Y)=\bigcap_{i=1}^{s}\bigl(\overline{R}_{A_i}^{[\succeq]}(X)\cup\overline{R}_{A_i}^{[\succeq]}(Y)\bigr)$.
(3) $\underline{PM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X\cap Y)=\bigcap_{i=1}^{s}\bigl(\underline{R}_{A_i}^{[\succeq]}(X)\cap\underline{R}_{A_i}^{[\succeq]}(Y)\bigr)$.
(4) $\overline{PM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X\cup Y)=\bigcup_{i=1}^{s}\bigl(\overline{R}_{A_i}^{[\succeq]}(X)\cup\overline{R}_{A_i}^{[\succeq]}(Y)\bigr)$.
Proof. 
By Proposition 3 (5L), (5U) and Proposition 12, it can be obtained easily. □
Proposition A9.
Let $I^{[\succeq]}=(U,AT,V,f)$ be an IIFOIS, $A_i\subseteq AT$ ($i=1,2,\ldots,s$; $2\leq s\leq 2^{|AT|}$), and $X\in\mathcal{P}(U)$. The lower and upper approximations of the OMGRS and the PMGRS can be expressed by the support feature function as follows.
(1) $\underline{OM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)=\Bigl\{x\in U\ \Big|\ \frac{1}{s}\sum_{i=1}^{s}S_{X}^{A_i}(x)>0\Bigr\}$, $\overline{OM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)=\Bigl\{x\in U\ \Big|\ \frac{1}{s}\sum_{i=1}^{s}\bigl(1-S_{\sim X}^{A_i}(x)\bigr)=1\Bigr\}$.
(2) $\underline{PM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)=\Bigl\{x\in U\ \Big|\ \frac{1}{s}\sum_{i=1}^{s}S_{X}^{A_i}(x)=1\Bigr\}$, $\overline{PM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)=\Bigl\{x\in U\ \Big|\ \frac{1}{s}\sum_{i=1}^{s}\bigl(1-S_{\sim X}^{A_i}(x)\bigr)>0\Bigr\}$.
Proof. 
It can be obtained easily from the definitions of the optimistic multiple granulation rough set, the pessimistic multiple granulation rough set, and the support feature function. □
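The support-function characterizations of Proposition A9 can be checked against the class-based definitions directly. The sketch below assumes the same invented toy system as before:

```python
from itertools import combinations

U = set(range(6))
SCORES = {"A1": [3, 1, 2, 2, 0, 3], "A2": [1, 2, 0, 2, 1, 3], "A3": [2, 0, 1, 3, 1, 2]}
GRANS = list(SCORES)

def cls(a, x):
    return {y for y in U if SCORES[a][y] >= SCORES[a][x]}

def S(a, X, x):
    return 1 if cls(a, x) <= X else 0

def om_lower(X):
    return {x for x in U if any(cls(a, x) <= X for a in GRANS)}

def om_upper(X):
    return U - om_lower(U - X)

def pm_lower(X):
    return {x for x in U if all(cls(a, x) <= X for a in GRANS)}

def pm_upper(X):
    return U - pm_lower(U - X)

def subsets(v):
    v = sorted(v)
    return [set(c) for r in range(len(v) + 1) for c in combinations(v, r)]

s = len(GRANS)
for X in subsets(U):
    comp = U - X
    avg_in = {x: sum(S(a, X, x) for a in GRANS) / s for x in U}
    avg_meet = {x: sum(1 - S(a, comp, x) for a in GRANS) / s for x in U}
    assert om_lower(X) == {x for x in U if avg_in[x] > 0}     # some granulation supports
    assert om_upper(X) == {x for x in U if avg_meet[x] == 1}  # every class meets X
    assert pm_lower(X) == {x for x in U if avg_in[x] == 1}    # every granulation supports
    assert pm_upper(X) == {x for x in U if avg_meet[x] > 0}   # some class meets X
print("Proposition A9 checks pass")
```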
Proposition A10.
Let $I^{[\succeq]}=(U,AT,V,f)$ be an IIFOIS, $A_i\subseteq AT$ ($i=1,2,\ldots,s$; $2\leq s\leq 2^{|AT|}$), $\beta\in(0.5,1]$, and $X\subseteq U$. Then we have
(1) $\underline{PM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\subseteq\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}\subseteq\underline{OM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\subseteq\underline{R}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)$.
(2) $\overline{PM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\supseteq\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}\supseteq\overline{OM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\supseteq\overline{R}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)$.
(3) $\underline{PM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\subseteq\underline{R}_{A_i}^{[\succeq]}(X)\subseteq\underline{OM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\subseteq\underline{R}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)$.
(4) $\overline{PM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\supseteq\overline{R}_{A_i}^{[\succeq]}(X)\supseteq\overline{OM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\supseteq\overline{R}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)$.
Proof. 
By Definitions 1 and 4 and Propositions 11 and 12, it can be obtained easily.
It is worth mentioning that there is no fixed inclusion relationship between the approximation set $\underline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}$ (or $\overline{GM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)_{\beta}$) and an arbitrary single-granulation approximation $\underline{R}_{A_i}^{[\succeq]}(X)$ (or $\overline{R}_{A_i}^{[\succeq]}(X)$). □
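The "sandwich" inclusions of Proposition A10 (1)–(2) can be confirmed exhaustively. A sketch under the same assumed toy system (invented universe, scores, and $\beta=0.6$):

```python
from itertools import combinations

U = set(range(6))
SCORES = {"A1": [3, 1, 2, 2, 0, 3], "A2": [1, 2, 0, 2, 1, 3], "A3": [2, 0, 1, 3, 1, 2]}
GRANS = list(SCORES)

def cls(a, x):
    return {y for y in U if SCORES[a][y] >= SCORES[a][x]}

def S(a, X, x):
    return 1 if cls(a, x) <= X else 0

def combined(x):
    out = set(U)
    for a in GRANS:
        out &= cls(a, x)
    return out

def r_lower(X):
    return {x for x in U if combined(x) <= X}

def r_upper(X):
    return {x for x in U if combined(x) & X}

def om_lower(X):
    return {x for x in U if any(cls(a, x) <= X for a in GRANS)}

def om_upper(X):
    return U - om_lower(U - X)

def pm_lower(X):
    return {x for x in U if all(cls(a, x) <= X for a in GRANS)}

def pm_upper(X):
    return U - pm_lower(U - X)

def gm_lower(X, beta):
    return {x for x in U if sum(S(a, X, x) for a in GRANS) / len(GRANS) >= beta}

def gm_upper(X, beta):
    comp = U - X
    return {x for x in U
            if sum(1 - S(a, comp, x) for a in GRANS) / len(GRANS) > 1 - beta}

def subsets(v):
    v = sorted(v)
    return [set(c) for r in range(len(v) + 1) for c in combinations(v, r)]

beta = 0.6
for X in subsets(U):
    assert pm_lower(X) <= gm_lower(X, beta) <= om_lower(X) <= r_lower(X)   # (1)
    assert r_upper(X) <= om_upper(X) <= gm_upper(X, beta) <= pm_upper(X)   # (2)
print("Proposition A10 (1)-(2) checks pass")
```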
Proposition A11.
Let $I^{[\succeq]}=(U,AT,V,f)$ be an IIFOIS, $A_i\subseteq AT$ ($i=1,2,\ldots,s$; $2\leq s\leq 2^{|AT|}$), and $X\subseteq U$. Then
(1) $\rho^{[\succeq]}(R_{A_i}^{[\succeq]},X)\geq\rho_{o}^{[\succeq]}\bigl(\sum_{i=1}^{s}A_{i},X\bigr)\geq\rho^{[\succeq]}\bigl(R_{\sum_{i=1}^{s}A_{i}}^{[\succeq]},X\bigr)$.
(2) $\rho_{p}^{[\succeq]}\bigl(\sum_{i=1}^{s}A_{i},X\bigr)\geq\rho^{[\succeq]}(R_{A_i}^{[\succeq]},X)\geq\rho^{[\succeq]}\bigl(R_{\sum_{i=1}^{s}A_{i}}^{[\succeq]},X\bigr)$.
(3) $\rho_{p}^{[\succeq]}\bigl(\sum_{i=1}^{s}A_{i},X\bigr)\geq\rho^{[\succeq]}(R_{A_i}^{[\succeq]},X)\geq\rho_{o}^{[\succeq]}\bigl(\sum_{i=1}^{s}A_{i},X\bigr)\geq\rho^{[\succeq]}\bigl(R_{\sum_{i=1}^{s}A_{i}}^{[\succeq]},X\bigr)$.
Proof. 
(1) By Propositions 12 and 14, we have that
$\underline{R}_{A_i}^{[\succeq]}(X)\subseteq\underline{OM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\subseteq\underline{R}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)$
and
$\overline{R}_{A_i}^{[\succeq]}(X)\supseteq\overline{OM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\supseteq\overline{R}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X),$
then
$\frac{\bigl|\underline{R}_{A_i}^{[\succeq]}(X)\bigr|}{\bigl|\overline{R}_{A_i}^{[\succeq]}(X)\bigr|}\leq\frac{\bigl|\underline{OM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\bigr|}{\bigl|\overline{OM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\bigr|}\leq\frac{\bigl|\underline{R}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\bigr|}{\bigl|\overline{R}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\bigr|}.$
Therefore, $\rho^{[\succeq]}(R_{A_i}^{[\succeq]},X)\geq\rho_{o}^{[\succeq]}\bigl(\sum_{i=1}^{s}A_{i},X\bigr)\geq\rho^{[\succeq]}\bigl(R_{\sum_{i=1}^{s}A_{i}}^{[\succeq]},X\bigr)$ by Definition 2.
(2) By Propositions 12 and 14, we have that
$\underline{PM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\subseteq\underline{R}_{A_i}^{[\succeq]}(X)\subseteq\underline{R}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)$
and
$\overline{PM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\supseteq\overline{R}_{A_i}^{[\succeq]}(X)\supseteq\overline{R}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X),$
then
$\frac{\bigl|\underline{PM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\bigr|}{\bigl|\overline{PM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\bigr|}\leq\frac{\bigl|\underline{R}_{A_i}^{[\succeq]}(X)\bigr|}{\bigl|\overline{R}_{A_i}^{[\succeq]}(X)\bigr|}\leq\frac{\bigl|\underline{R}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\bigr|}{\bigl|\overline{R}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(X)\bigr|}.$
Therefore, $\rho_{p}^{[\succeq]}\bigl(\sum_{i=1}^{s}A_{i},X\bigr)\geq\rho^{[\succeq]}(R_{A_i}^{[\succeq]},X)\geq\rho^{[\succeq]}\bigl(R_{\sum_{i=1}^{s}A_{i}}^{[\succeq]},X\bigr)$ by Definition 5.
(3) It can be obtained easily from the information above. □
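The roughness orderings can be checked numerically. The sketch below assumes the standard roughness measure $\rho=1-|\underline{\,\cdot\,}(X)|/|\overline{\,\cdot\,}(X)|$, which matches the proof's use of the lower/upper cardinality ratio; the toy system itself is invented, as before.

```python
from itertools import combinations

U = set(range(6))
SCORES = {"A1": [3, 1, 2, 2, 0, 3], "A2": [1, 2, 0, 2, 1, 3], "A3": [2, 0, 1, 3, 1, 2]}
GRANS = list(SCORES)

def cls(a, x):
    return {y for y in U if SCORES[a][y] >= SCORES[a][x]}

def combined(x):
    out = set(U)
    for a in GRANS:
        out &= cls(a, x)
    return out

def r_lower(X):
    return {x for x in U if combined(x) <= X}

def r_upper(X):
    return {x for x in U if combined(x) & X}

def om_lower(X):
    return {x for x in U if any(cls(a, x) <= X for a in GRANS)}

def om_upper(X):
    return U - om_lower(U - X)

def pm_lower(X):
    return {x for x in U if all(cls(a, x) <= X for a in GRANS)}

def pm_upper(X):
    return U - pm_lower(U - X)

def subsets(v):
    v = sorted(v)
    return [set(c) for r in range(len(v) + 1) for c in combinations(v, r)]

def rho(lo, up):
    """Assumed roughness measure: 1 - |lower| / |upper| (upper nonempty)."""
    return 1 - len(lo) / len(up)

for X in subsets(U):
    if not X:
        continue  # keep every upper approximation nonempty
    r_cmb = rho(r_lower(X), r_upper(X))
    r_om = rho(om_lower(X), om_upper(X))
    r_pm = rho(pm_lower(X), pm_upper(X))
    for a in GRANS:
        r_a = rho({x for x in U if cls(a, x) <= X},
                  {x for x in U if cls(a, x) & X})
        assert r_pm >= r_a >= r_om >= r_cmb   # Proposition A11 (3)
print("Proposition A11 checks pass")
```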
Proposition A12.
Let I [ ] = ( U , C T d , V , f ) be an IIFDOIS, A i A T , i = 1 , 2 , , s ( s 2 | A T | ). Then
(1) $\gamma^{[\succeq]}(R_{A_i}^{[\succeq]},d)\leq\gamma_{o}^{[\succeq]}\bigl(\sum_{i=1}^{s}A_{i},d\bigr)\leq\gamma^{[\succeq]}\bigl(R_{\sum_{i=1}^{s}A_{i}}^{[\succeq]},d\bigr)$.
(2) $\gamma_{p}^{[\succeq]}\bigl(\sum_{i=1}^{s}A_{i},d\bigr)\leq\gamma^{[\succeq]}(R_{A_i}^{[\succeq]},d)\leq\gamma^{[\succeq]}\bigl(R_{\sum_{i=1}^{s}A_{i}}^{[\succeq]},d\bigr)$.
(3) $\gamma_{p}^{[\succeq]}\bigl(\sum_{i=1}^{s}A_{i},d\bigr)\leq\gamma^{[\succeq]}(R_{A_i}^{[\succeq]},d)\leq\gamma_{o}^{[\succeq]}\bigl(\sum_{i=1}^{s}A_{i},d\bigr)\leq\gamma^{[\succeq]}\bigl(R_{\sum_{i=1}^{s}A_{i}}^{[\succeq]},d\bigr)$.
Proof. 
(1) For any $D_j\in U/d=\{D_1,D_2,\ldots,D_k\}$, by Propositions 12 and 14, we have that
$\underline{R}_{A_i}^{[\succeq]}(D_j)\subseteq\underline{OM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(D_j)\subseteq\underline{R}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(D_j),$
then
$\bigl|\underline{R}_{A_i}^{[\succeq]}(D_j)\bigr|\leq\bigl|\underline{OM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(D_j)\bigr|\leq\bigl|\underline{R}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(D_j)\bigr|.$
Hence, $\gamma^{[\succeq]}(R_{A_i}^{[\succeq]},d)\leq\gamma_{o}^{[\succeq]}\bigl(\sum_{i=1}^{s}A_{i},d\bigr)\leq\gamma^{[\succeq]}\bigl(R_{\sum_{i=1}^{s}A_{i}}^{[\succeq]},d\bigr)$ by Definition 3.
(2) For any $D_j\in U/d=\{D_1,D_2,\ldots,D_k\}$, by Propositions 12 and 14, we have that
$\underline{PM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(D_j)\subseteq\underline{R}_{A_i}^{[\succeq]}(D_j)\subseteq\underline{R}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(D_j),$
then
$\bigl|\underline{PM}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(D_j)\bigr|\leq\bigl|\underline{R}_{A_i}^{[\succeq]}(D_j)\bigr|\leq\bigl|\underline{R}_{\sum_{i=1}^{s}A_{i}}^{[\succeq]}(D_j)\bigr|.$
Hence, $\gamma_{p}^{[\succeq]}\bigl(\sum_{i=1}^{s}A_{i},d\bigr)\leq\gamma^{[\succeq]}(R_{A_i}^{[\succeq]},d)\leq\gamma^{[\succeq]}\bigl(R_{\sum_{i=1}^{s}A_{i}}^{[\succeq]},d\bigr)$ by Definition 6.
(3) It can be obtained easily from the information above. □
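The approximation-quality ordering can also be exercised on a toy decision system. This sketch assumes the usual quality measure $\gamma=\sum_j|\underline{\,\cdot\,}(D_j)|/|U|$ over a decision partition; the universe, scores, and the two decision classes below are invented for the illustration (loosely mirroring the Y/N decision of Table 1).

```python
from itertools import combinations

U = set(range(6))
SCORES = {"A1": [3, 1, 2, 2, 0, 3], "A2": [1, 2, 0, 2, 1, 3], "A3": [2, 0, 1, 3, 1, 2]}
GRANS = list(SCORES)
DEC = [{0, 1, 5}, {2, 3, 4}]  # illustrative decision classes (Y / N)

def cls(a, x):
    return {y for y in U if SCORES[a][y] >= SCORES[a][x]}

def combined(x):
    out = set(U)
    for a in GRANS:
        out &= cls(a, x)
    return out

def r_lower(X):
    return {x for x in U if combined(x) <= X}

def om_lower(X):
    return {x for x in U if any(cls(a, x) <= X for a in GRANS)}

def pm_lower(X):
    return {x for x in U if all(cls(a, x) <= X for a in GRANS)}

def gamma(lower_fn):
    """Assumed approximation quality: summed lower-approximation mass over |U|."""
    return sum(len(lower_fn(D)) for D in DEC) / len(U)

g_cmb = gamma(r_lower)
g_om = gamma(om_lower)
g_pm = gamma(pm_lower)
for a in GRANS:
    g_a = gamma(lambda D, a=a: {x for x in U if cls(a, x) <= D})
    assert g_pm <= g_a <= g_om <= g_cmb   # Proposition A12 (3)
print("Proposition A12 checks pass")
```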

Table 1. An interval-valued intuitionistic fuzzy ordered information system.
U | a1 | a2 | a3 | a4 | d
x1 | ⟨[0.32, 0.37], [0.20, 0.31]⟩ | ⟨[0.53, 0.62], [0.10, 0.18]⟩ | ⟨[0.26, 0.35], [0.30, 0.38]⟩ | ⟨[0.53, 0.62], [0.16, 0.20]⟩ | Y
x2 | ⟨[0.39, 0.42], [0.11, 0.12]⟩ | ⟨[0.35, 0.37], [0.28, 0.35]⟩ | ⟨[0.10, 0.20], [0.50, 0.60]⟩ | ⟨[0.47, 0.55], [0.25, 0.32]⟩ | Y
x3 | ⟨[0.18, 0.25], [0.39, 0.52]⟩ | ⟨[0.30, 0.36], [0.37, 0.42]⟩ | ⟨[0.55, 0.62], [0.15, 0.20]⟩ | ⟨[0.38, 0.45], [0.30, 0.40]⟩ | N
x4 | ⟨[0.27, 0.33], [0.25, 0.35]⟩ | ⟨[0.20, 0.32], [0.45, 0.52]⟩ | ⟨[0.48, 0.53], [0.17, 0.30]⟩ | ⟨[0.30, 0.40], [0.35, 0.45]⟩ | N
x5 | ⟨[0.35, 0.40], [0.15, 0.25]⟩ | ⟨[0.41, 0.49], [0.25, 0.32]⟩ | ⟨[0.30, 0.42], [0.25, 0.33]⟩ | ⟨[0.60, 0.82], [0.12, 0.15]⟩ | N
x6 | ⟨[0.43, 0.67], [0.08, 0.12]⟩ | ⟨[0.45, 0.50], [0.20, 0.24]⟩ | ⟨[0.62, 0.72], [0.08, 0.18]⟩ | ⟨[0.36, 0.43], [0.30, 0.40]⟩ | Y
Table 2. The support feature function of objects in Table 1.
U | $S_X^{A_1}(x)$ | $S_{\sim X}^{A_1}(x)$ | $S_X^{A_2}(x)$ | $S_{\sim X}^{A_2}(x)$ | $S_X^{A_3}(x)$ | $S_{\sim X}^{A_3}(x)$ | $S_X^{A_4}(x)$ | $S_{\sim X}^{A_4}(x)$
x1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0
x2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
x3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
x4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
x5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
x6 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0

Li, Z.; Zhang, X. Multiple Granulation Rough Set Approach to Interval-Valued Intuitionistic Fuzzy Ordered Information Systems. Symmetry 2021, 13, 949. https://doi.org/10.3390/sym13060949