
Parameter Reductions of Bipolar Fuzzy Soft Sets with Their Decision-Making Algorithms

1 Department of Mathematics, University of the Punjab, New Campus, Lahore 54590, Pakistan
2 Department of Mathematics, College of Science, Jazan University, New Campus, P.O. Box 2097, Jazan 45142, Saudi Arabia
3 BORDA Research Unit and IME, Universidad de Salamanca, 37008 Salamanca, Spain
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(8), 949; https://doi.org/10.3390/sym11080949
Submission received: 12 June 2019 / Revised: 11 July 2019 / Accepted: 12 July 2019 / Published: 25 July 2019

Abstract
Parameter reduction is a very important technique in many fields, including pattern recognition. Many reduction techniques have been reported for fuzzy soft sets to solve decision-making problems. However, little attention has been paid to the parameter reduction of bipolar fuzzy soft sets, which take advantage of the fact that membership and non-membership degrees play a symmetric role. This methodology is of great importance in many decision-making situations. In this paper, we provide a novel theoretical approach to solve decision-making problems based on bipolar fuzzy soft sets and study four types of parameter reductions of such sets. Parameter reduction algorithms are developed and illustrated through examples. The experimental results show that our proposed parameter reduction techniques delete the irrelevant parameters while keeping definite decision-making choices unchanged. Moreover, the reduction algorithms are compared regarding the degree of ease of computing the reduction, applicability, exact degree of reduction, applied situation, and multiple use of the parameter reduction. Finally, a real application is developed to describe the validity of our proposed reduction algorithms.

1. Introduction

Many decision-making problems in the physical, applied, social, and life sciences involve datasets that contain uncertain and vague information. Fuzzy set theory [1] and rough set theory [2] are classical mathematical tools to characterize uncertain and inaccurate data, but as indicated in [3,4], each of these theories lacks theoretical parametric tools. Molodtsov [3] initiated the concept of the soft set as a new mathematical tool for handling vague and uncertain information and efficiently applied soft set theory in multiple directions, for example, operations research, game theory, the Riemann integral, and probability. Currently, research on soft set theory is proceeding rapidly and has achieved many fruitful results. Some fundamental algebraic properties of soft sets were proposed by Maji et al. [5]. Feng et al. [6] proposed new hybrid models by combining fuzzy sets, rough sets, and soft sets. Roy and Maji [7] presented a technique to solve decision-making problems based on fuzzy soft sets. Xiao et al. [8] developed a forecasting method based on the fuzzy soft set model.
Since then, much research on parameter reduction has been completed, and several results have been derived [9,10,11,12,13,14]. Research on solving decision-making problems with soft set theory builds on the concept of parameter reduction: the reduction of parameters in soft set theory is designed to remove redundant parameters while preserving the original decision choices. Maji et al. [15] first solved soft set decision-making problems using rough set-based reduction [16]. To improve the treatment of the decision-making problems in [15], Chen et al. [17] and Kong et al. [18] proposed the parameterization reduction and the normal parameter reduction of soft sets, respectively. Ma et al. [19] proposed a new efficient algorithm of normal parameter reduction to improve [15,16,17]. Roy and Maji [7] proposed a new method for dealing with decision-making problems with fuzzy soft sets; the method makes a decision through a comparison table derived from a fuzzy soft set in the sense of parameters. Kong et al. [20] indicated that the Roy and Maji technique [7] was inaccurate and proposed a modified approach to solve this issue; they described the effectiveness of the Roy and Maji technique [7] and demonstrated its limitations. Ma et al. [21] proposed extensive parameter reduction methods for interval-valued fuzzy soft sets. Decision-making research on the reduction of fuzzy soft sets has received considerable attention. Using the idea of the level soft set, Feng et al. [22] introduced the parameter reduction of fuzzy soft sets and proposed an adjustable approach to decision-making based on fuzzy soft sets. Moreover, Feng et al. [23] presented another view of decision-making based on interval-valued fuzzy soft sets. Jiang et al. [24] proposed a reduction method of intuitionistic fuzzy soft sets for decision-making using the level soft sets of intuitionistic fuzzy sets. The theory of fuzzy systems has rich applications in different areas, including engineering [25,26,27]. Zhang [28,29] first proposed the idea of bipolar fuzzy sets (Yin Yang bipolar fuzzy sets) in the space $\{ (x, y) \mid (x, y) \in [-1, 0] \times [0, 1] \}$ as an extension of fuzzy sets. In the case of bipolar fuzzy sets, the membership degree range is enlarged from the interval $[0, 1]$ to $[-1, 1]$. The idea behind this description is related to the existence of bipolar information. For example, profit and loss, feedback and feed-forward, and competition and cooperation are usually the two sides of decision-making. In Chinese medicine, Yin and Yang are the two sides: Yin is the negative side of a system, and Yang is the positive side of a system. Bipolar fuzzy set theory has many applications in different fields, including pattern recognition and machine learning. Saleem et al. [30] presented a new hybrid model, namely bipolar fuzzy soft sets, by combining bipolar fuzzy sets with soft sets. Motivated by these concerns, in this paper, we present four ways to reduce the parameters of bipolar fuzzy soft sets by developing another bipolar fuzzy soft set theoretical approach to solve decision-making problems. In particular, we solve the decision-making problem in [30] by our proposed decision-making algorithm based on bipolar fuzzy soft sets. We propose an algorithm for each reduction technique. Furthermore, we compare these reduction methods and discuss their pros and cons in detail. We also present a real-life application to show the validity of our proposed reduction algorithms.
For other terminologies not mentioned in the paper, the readers are referred to [31,32,33,34,35,36,37,38,39].
The rest of this paper is structured as follows. Section 2 introduces the basic definitions and develops a new technique for decision-making based on bipolar fuzzy soft sets. Section 3 defines four kinds of parameter reductions of bipolar fuzzy soft sets and presents their reduction algorithms, which are illustrated by corresponding examples. A comparison among the reduction algorithms is presented in Section 4. Section 5 is devoted to solving a real-life decision-making application. Finally, the conclusions of this paper are provided in Section 6. Throughout this paper, the notations given in Table 1 will be used.

2. Another Bipolar Fuzzy Soft Set Approach to Decision-Making Problems

Saleem et al. [30] presented an efficient approach to solve practical decision-making problems based on bipolar fuzzy soft sets. In this section, we first review the definitions of bipolar fuzzy sets and bipolar fuzzy soft sets, and then, we introduce a novel approach based on bipolar fuzzy soft sets, which can effectively solve decision-making problems, followed by an algorithm. Moreover, we use our proposed algorithm to solve the decision-making application presented by Saleem et al. [30] and observe that the optimal decisions obtained by both methods are the same.
Definition 1.
[29,40] Let O be a nonempty universe of objects. A bipolar fuzzy set B in O is defined as:
$$B = \{ (o, \lambda^+(o), \lambda^-(o)) \mid o \in O \},$$
where $\lambda^+ : O \to [0, 1]$ and $\lambda^- : O \to [-1, 0]$ are mappings. The positive membership degree $\lambda^+(o)$ denotes the satisfaction degree of an object o for the property corresponding to the bipolar fuzzy set B, and the negative membership degree $\lambda^-(o)$ denotes the satisfaction degree of o for some implicit counter-property corresponding to B.
Definition 2.
[30] Let O be a nonempty universe of objects and R a universe of parameters related to the objects in O. A pair $(G, R)$ is called a BFSS over the universe O, where G is a mapping from R into $\mathrm{BF}^O$, the family of all bipolar fuzzy sets of O. It is defined as follows:
$$(G, R) = \{ (o, \lambda^+_r(o), \lambda^-_r(o)) \mid o \in O, r \in R \}. \quad (1)$$
Assume that $O = \{o_1, o_2, \ldots, o_n\}$ is a universe of objects and $R = \{r_1, r_2, \ldots, r_m\}$ is a universe of parameters related to the objects in O. Then, a BFSS $(G, R)$ can also be presented in a tabular arrangement, as shown in Table 2.
Definition 3.
Let $O = \{o_1, o_2, \ldots, o_n\}$ be a universe of objects and $R = \{r_1, r_2, \ldots, r_m\}$ a universal set of parameters associated with the objects in O. For a BFSS $(G, R)$, $\lambda^+_{r_j}(o_i) \in [0, 1]$ and $\lambda^-_{r_j}(o_i) \in [-1, 0]$ are the positive and negative membership degrees of each object $o_i$. We define the scores of the positive and negative membership degrees, $b^+_{r_j}(o_i)$ and $b^-_{r_j}(o_i)$, for each $r_j$ $(j = 1, 2, \ldots, m)$ as:
$$b^+_{r_j}(o_i) = \sum_{s=1}^{n} \big( \lambda^+_{r_j}(o_i) - \lambda^+_{r_j}(o_s) \big), \quad (2)$$
$$b^-_{r_j}(o_i) = \sum_{s=1}^{n} \big( \lambda^-_{r_j}(o_i) - \lambda^-_{r_j}(o_s) \big). \quad (3)$$
Definition 4.
Let $O = \{o_1, o_2, \ldots, o_n\}$ be a universe of objects and $R = \{r_1, r_2, \ldots, r_m\}$ a universal set of parameters associated with the objects in O. For a BFSS $(G, R)$, the score of the membership degrees for $r_j$ is given by:
$$b_{r_j}(o_i) = b^+_{r_j}(o_i) + b^-_{r_j}(o_i), \quad (4)$$
where $b^+_{r_j}(o_i)$ and $b^-_{r_j}(o_i)$ are the scores of the positive and negative membership degrees for each $r_j$, respectively.
Definition 5.
Let $O = \{o_1, o_2, \ldots, o_n\}$ be a universe of objects and $R = \{r_1, r_2, \ldots, r_m\}$ a universal set of parameters associated with the objects in O. For a BFSS $(G, R)$, the final score of each object $o_i$, denoted by $S_i$, is defined as follows:
$$S_i = \sum_{r_j \in R} b_{r_j}(o_i). \quad (5)$$
We present a new decision-making technique based on BFSSs as follows:
Example 1.
Reconsider Example 8 in [30]. Let $O = \{o_1, o_2, o_3, o_4\}$ be the set of four cars, $R = \{r_1 = \text{costly}, r_2 = \text{beautiful}, r_3 = \text{fuel efficient}, r_4 = \text{modern technology}, r_5 = \text{luxurious}\}$ a collection of parameters, and $Q = \{r_1, r_2, r_5\} \subset R$. Then, a BFSS $(G, Q)$ is given by Table 3. We now apply Algorithm 1 to $(G, Q)$.
By using (2) and (3), the scores of the positive $b^+_{r_j}(o_i)$ and negative $b^-_{r_j}(o_i)$ membership degrees for $i = 1, 2, 3, 4$ and $j = 1, 2, 5$ are computed; for instance:
$$b^+_{r_1}(o_1) = \sum_{s=1}^{4} \big( \lambda^+_{r_1}(o_1) - \lambda^+_{r_1}(o_s) \big) = (0.4 - 0.4) + (0.4 - 0.6) + (0.4 - 0.8) + (0.4 - 0.5) = -0.7.$$
Similarly, the remaining values are given by Table 4 and Table 5.
Now, by using Definition 4, the tabular arrangement for the score of the membership degrees of BFSS ( G , Q ) is given by Table 6.
By Definition 5, the final score of each car $o_i$ is given by Table 7. By way of illustration,
$$S_1 = \sum_{r_j \in Q} b_{r_j}(o_1) = b_{r_1}(o_1) + b_{r_2}(o_1) + b_{r_5}(o_1) = -1.6 + (-0.5) + 1.6 = -0.5.$$
Clearly, $S_3 = 0.7$ is the maximum score, attained by the object $o_3$. Thus, $o_3$ is the decision object, which coincides with the decision obtained in [30].
Algorithm 1 Selection of an object based on BFSSs.
1. Input: O, a universal set of n objects; R, a universe of m parameters; and a BFSS $(G, R)$ as given by Definition 2.
2. Find the scores of the positive $b^+_{r_j}(o_i)$ and negative $b^-_{r_j}(o_i)$ membership degrees, where $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, m$.
3. Calculate the scores of the membership degrees $b_{r_j}(o_i)$, where $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, m$, by Definition 4.
4. Evaluate the final score $S_i$ of each object $o_i$, $i = 1, 2, \ldots, n$, by Definition 5.
5. Compute all indices l for which $S_l = \max_{i = 1, 2, \ldots, n} S_i$.
6. Output: the decision is any $o_l$ corresponding to the list of indices obtained in Step 5.
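To make the scoring machinery concrete, the following minimal Python sketch implements Definitions 3-5 and Steps 2-6 of Algorithm 1 on the data of Table 3, with the negative membership components written explicitly. The data layout and names are illustrative assumptions, not part of the paper; individual intermediate scores may differ slightly from the printed tables, but the optimal object $o_3$, with $S_3 = 0.7$, agrees with Example 1.

```python
# Minimal sketch of Algorithm 1 on the BFSS (G, Q) of Table 3.
bfss = {  # parameter -> [(lambda+, lambda-) for o_1, ..., o_4]
    "r1": [(0.4, -0.5), (0.6, -0.3), (0.8, -0.2), (0.5, -0.2)],
    "r2": [(0.5, -0.5), (0.3, -0.1), (0.4, -0.4), (0.7, -0.3)],
    "r5": [(0.7,  0.0), (0.5, -0.3), (0.6, -0.3), (0.4, -0.4)],
}

def membership_scores(column):
    """b_{r_j}(o_i) = b+_{r_j}(o_i) + b-_{r_j}(o_i), as in Definitions 3 and 4."""
    pos = [sum(p_i - p_s for p_s, _ in column) for p_i, _ in column]
    neg = [sum(m_i - m_s for _, m_s in column) for _, m_i in column]
    return [p + m for p, m in zip(pos, neg)]

cols = {r: membership_scores(c) for r, c in bfss.items()}
n = len(next(iter(bfss.values())))
S = [round(sum(cols[r][i] for r in cols), 10) for i in range(n)]  # Definition 5
best = max(range(n), key=lambda i: S[i])                          # Steps 5 and 6
print(S, "-> optimal object: o%d" % (best + 1))  # o_3 attains the maximum, S_3 = 0.7
```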
Example 2.
Let O = { o 1 , o 2 , o 3 , o 4 , o 5 } be a collection of five objects under consideration and R = { r 1 , r 2 , r 3 , r 4 } a collection of parameters related to the objects in O. Then, a BFSS ( G , R ) is given by Table 8.
By using (2) and (3), the scores of the positive $b^+_{r_j}(o_i)$ and negative $b^-_{r_j}(o_i)$ membership degrees for $i = 1, 2, \ldots, 5$ and $j = 1, 2, 3, 4$ are given by Table 9 and Table 10, respectively.
Now, by using Definition 4, the scores of the membership degrees $b_{r_j}(o_i)$ for $i = 1, 2, \ldots, 5$ and $j = 1, 2, 3, 4$ of the BFSS $(G, R)$ are given by Table 11.
Using (5), the final score of each object $o_i$ is given by Table 12.
Clearly, S 4 = 5.6 is the maximum score for the object o 4 , which coincides with the decision object obtained using the decision-making algorithm in [30].
From the above analysis, it can easily be perceived that our proposed decision-making approach based on BFSSs is efficient and reliable. However, from a realistic perspective, the parameter set may contain parameters that are redundant for decision-making. To overcome this issue, the parameter reduction of BFSSs is proposed: a technique in which the set of parameters is reduced to a minimal subset that yields the same decision as the whole set.

3. Four Types of Parameter Reductions of BFSSs

1. OCB-PR:
We first define OCB-PR and then provide an algorithmic approach to obtain it, which is illustrated via an example.
Definition 6.
Let $O = \{o_1, o_2, \ldots, o_n\}$ be a universe of objects and $R = \{r_1, r_2, \ldots, r_m\}$ a universal set of parameters associated with the objects in O. For a BFSS $(G, R)$, denote by $M_R$ the family of objects in O that attain the maximum value of $S_i$. For each $B \subset R$, if $M_{R-B} = M_R$, then B is said to be dispensable in R; otherwise, B is indispensable in R. The parameter set R is called independent if every $B \subset R$ is indispensable in R; otherwise, R is dependent. A subset P of R is said to be an OCB-PR of R if the following axioms hold.
1. P is independent (that is, P is the smallest subset of R that keeps the optimal decision object invariant).
2. $M_P = M_R$.
Based on Definition 6, we propose an OCB-PR algorithm that deletes redundant parameters while keeping the optimal decision object unchanged.
Example 3.
Let $O = \{o_1, o_2, o_3, o_4\}$ be the set of four cars and $R = \{r_1 = \text{costly}, r_2 = \text{beautiful}, r_3 = \text{fuel efficient}, r_4 = \text{modern technology}, r_5 = \text{luxurious}\}$ a collection of parameters. Reconsider the BFSS $(G, Q)$ of Example 8 in [30], where $Q = \{r_1, r_2, r_5\} \subset R$. We now apply Algorithm 2 to the BFSS $(G, Q)$.
From Table 7, we compute that, for $B = \{r_2, r_5\}$, we obtain $M_{Q-B} = M_Q$. Hence, $\{r_1\}$ is an OCB-PR of the BFSS $(G, Q)$, given by Table 13.
From Table 13, it can easily be observed that $o_3$ is the optimal decision object after the reduction. Clearly, the subset $\{r_1\} \subset Q$ is minimal and keeps the optimal decision object unchanged.
Algorithm 2 OCB-PR.
1. Input: O, a universal set of n objects; R, a universe of m parameters; and a BFSS $(G, R)$ as given by Definition 2.
2. Calculate the scores of the positive $b^+_{r_j}(o_i)$ and negative $b^-_{r_j}(o_i)$ membership degrees for $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, m$.
3. Calculate the scores of the membership degrees $b_{r_j}(o_i)$ for $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, m$ by using (4).
4. Evaluate the final score $S_i$ of each object $o_i$, $i = 1, 2, \ldots, n$, by Definition 5.
5. Compute all $B = \{r_1, r_2, \ldots, r_p\} \subset R$ that satisfy the condition:
$$M_{R-B} = M_R. \quad (6)$$
6. Output: the set $R - B$ is referred to as an OCB-PR of the BFSS $(G, R)$. If no $B \subset R$ satisfies (6), then there is no OCB-PR of the BFSS $(G, R)$.
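As a rough illustration of Algorithm 2, the following sketch reuses the membership-score columns `cols` from the previous sketch (all names are illustrative assumptions) and searches the subsets of R from largest to smallest for a dispensable B, so that the returned complement $R - B$ is as small as possible.

```python
from itertools import combinations

def ocb_pr(cols, n):
    """Sketch of Algorithm 2: return a smallest R - B with M_{R-B} = M_R,
    or None when no B satisfies condition (6)."""
    R = list(cols)
    def M(params):  # objects attaining the maximal final score over `params`
        S = [round(sum(cols[r][i] for r in params), 10) for i in range(n)]
        return frozenset(i for i, s in enumerate(S) if s == max(S))
    M_R = M(R)
    for k in range(len(R) - 1, 0, -1):       # try to delete as much as possible
        for B in combinations(R, k):
            kept = [r for r in R if r not in B]
            if M(kept) == M_R:
                return kept                  # an OCB-PR of the BFSS (G, R)
    return None

# On the Example 1 data, ocb_pr(cols, 4) returns ['r1'], matching Example 3.
```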
2. IRDCB-PR:
There are several real situations in which our main task is to compute the rank of optimal and suboptimal choices. The suboptimal choices are not considered by the OCB-PR method because OCB-PR only studies the optimal choice. To overcome this drawback, we define IRDCB-PR and present an algorithmic approach that keeps the rank of optimal and suboptimal choices unchanged after deleting the irrelevant parameters.
Definition 7.
Let $O = \{o_1, o_2, \ldots, o_n\}$ be a universe of objects, $R = \{r_1, r_2, \ldots, r_m\}$ a universe of parameters, and $P \subseteq R$. For a BFSS $(G, R)$, an indiscernibility relation is given by:
$$IND(P) = \{ (o_i, o_j) \in O \times O \mid S_P(o_i) = S_P(o_j) \},$$
where $S_P(o_i) = \sum_{r_s \in P} b_{r_s}(o_i)$. For an arbitrary BFSS $(G, R)$ over $O = \{o_1, o_2, \ldots, o_n\}$, the decision partition is given by:
$$D_R = \big\{ \{o_1, o_2, \ldots, o_i\}_{S_1}, \{o_{i+1}, \ldots, o_j\}_{S_2}, \ldots, \{o_k, \ldots, o_n\}_{S_z} \big\},$$
where, for each subclass $\{o_u, o_{u+1}, \ldots, o_{u+v}\}_{S_i}$, we have $S_R(o_u) = S_R(o_{u+1}) = \cdots = S_R(o_{u+v}) = S_i$, and $S_1 > S_2 > \cdots > S_z$; that is, there are z subclasses, and the objects are ranked with respect to their score values.
Definition 8.
Let $O = \{o_1, o_2, \ldots, o_n\}$ be a universe of objects, $R = \{r_1, r_2, \ldots, r_m\}$ a universal set of parameters associated with the objects in O, and let $(G, R)$ be a BFSS. For each $B \subset R$, if $D_{R-B} = D_R$, then B is said to be dispensable in R; otherwise, B is indispensable in R. The parameter set R is said to be independent if each $B \subset R$ is indispensable in R; otherwise, R is dependent. A subset P of R is said to be an IRDCB-PR of R if the following axioms hold.
1. P is independent (that is, P is the minimal subset of R that keeps the rank of optimal and suboptimal decision choices unchanged).
2. $D_P = D_R$.
Based on Definition 8, we propose an IRDCB-PR algorithm (see Algorithm 3) that deletes irrelevant parameters while keeping the rank of optimal and suboptimal decision choice objects unchanged.
Algorithm 3 IRDCB-PR.
1. Input: O, a universal set of n objects; R, a universe of m parameters; and a BFSS $(G, R)$ as given by Definition 2.
2. Calculate the scores of the positive $b^+_{r_j}(o_i)$ and negative $b^-_{r_j}(o_i)$ membership degrees for $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, m$.
3. Calculate the scores of the membership degrees $b_{r_j}(o_i)$ for $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, m$ by using (4).
4. Evaluate the final score $S_i$ of each object $o_i$, $i = 1, 2, \ldots, n$, by using (5).
5. Compute all $B = \{r_1, r_2, \ldots, r_p\} \subset R$ that satisfy the condition:
$$D_{R-B} = D_R. \quad (7)$$
6. Output: the set $R - B$ is referred to as an IRDCB-PR of the BFSS $(G, R)$. If no $B \subset R$ satisfies (7), then there is no IRDCB-PR of the BFSS $(G, R)$.
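The decision partition of Definition 7 can be represented as a ranked tuple of object groups, and a subset B is then dispensable exactly when deleting it leaves this tuple unchanged. A minimal sketch, again over the illustrative `cols` layout used above:

```python
def decision_partition(cols, params, n):
    """D as in Definition 7: objects grouped by equal final score over `params`,
    with the groups ordered by decreasing score."""
    S = [round(sum(cols[r][i] for r in params), 10) for i in range(n)]
    groups = {}
    for i, s in enumerate(S):
        groups.setdefault(s, set()).add(i)
    return tuple(frozenset(groups[s]) for s in sorted(groups, reverse=True))

def irdcb_dispensable(cols, B, n):
    """Condition (7): D_{R-B} = D_R, i.e., deleting B preserves both the
    partition of the objects and their rank."""
    R = list(cols)
    kept = [r for r in R if r not in set(B)]
    return decision_partition(cols, kept, n) == decision_partition(cols, R, n)
```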
Example 4.
Let O = { o 1 , o 2 , o 3 , o 4 } be a universal set of four objects and R = { r 1 , r 2 , r 3 } a set of parameters related to the objects in O. Then, a BFSS ( G , R ) is given by Table 14.
By using (2) and (3), the scores of positive b r j + ( o i ) and negative b r j ( o i ) membership degrees for i = 1 , 2 , 3 , 4 and j = 1 , 2 , 3 are given by Table 15 and Table 16, respectively.
Now, by using Definition 4, the tabular arrangement for the score of membership degrees b r j ( o i ) where i = 1 , 2 , 3 , 4 and j = 1 , 2 , 3 of ( G , R ) is given by Table 17.
From ( 5 ) , the final score of each object o i , ( i = 1 , 2 , 3 , 4 ) is given by Table 18.
Clearly, $S_4 = 3.7$ is the maximum score, attained by the object $o_4$. Thus, $o_4$ is the optimal decision object, which coincides with the decision obtained using the algorithm in [30]. From Table 18, it can readily be computed that:
$$D_R = \{ \{o_4\}_{3.7}, \{o_3\}_{3.3}, \{o_2\}_{0.3}, \{o_1\}_{-5.5} \}.$$
Using Algorithm 3, we proceed by examining the subsets of R. For $B = \{r_1, r_2\}$, we have $D_{R-B} = \{ \{o_4\}_{1.5}, \{o_3\}_{0.7}, \{o_2\}_{0.3}, \{o_1\}_{-2.5} \}$, so $D_{R-B} = D_R$: after the reduction, the rank and partition of the objects are unchanged. Hence, $\{r_3\}$ (not the only possibility) is an IRDCB-PR of the BFSS $(G, R)$, given by Table 19.
Clearly, $\{r_3\} \subset R$ is a minimal subset that keeps the rank of the decision choices unchanged.
3. N-PR:
The parameter reduction techniques OCB-PR and IRDCB-PR are not always workable in many practical applications. Therefore, we provide the normal parameter reduction of BFSSs, which addresses the issues of added parameters and the suboptimal choice. We present a definition of N-PR and provide an algorithmic method to obtain it, which is illustrated via an example.
Definition 9.
Let $O = \{o_1, o_2, \ldots, o_n\}$ be a universe of objects and $R = \{r_1, r_2, \ldots, r_m\}$ a universal set of parameters associated with the objects in O. For a BFSS $(G, R)$, a subset $B = \{r_1, r_2, \ldots, r_p\} \subset R$ is called dispensable if it satisfies:
$$\sum_{r_j \in B} b_{r_j}(o_1) = \sum_{r_j \in B} b_{r_j}(o_2) = \cdots = \sum_{r_j \in B} b_{r_j}(o_n).$$
Otherwise, B is indispensable. A subset $P \subset R$ is called an N-PR of R if the following axioms hold.
1. P is indispensable.
2. $\sum_{r_j \in R-P} b_{r_j}(o_1) = \sum_{r_j \in R-P} b_{r_j}(o_2) = \cdots = \sum_{r_j \in R-P} b_{r_j}(o_n)$.
Based on Definition 9, we propose the N-PR algorithm (Algorithm 4) as follows.
Algorithm 4 N-PR.
1. Input: O, a universal set of n objects; R, a universe of m parameters; and a BFSS $(G, R)$ as given by Definition 2.
2. Calculate the scores of the positive $b^+_{r_j}(o_i)$ and negative $b^-_{r_j}(o_i)$ membership degrees for $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, m$.
3. Calculate the scores of the membership degrees $b_{r_j}(o_i)$ for $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, m$ by Definition 4.
4. Evaluate the final score $S_i$ of each object $o_i$, $i = 1, 2, \ldots, n$, by Definition 5.
5. For each $B \subset R$ with cardinality $|R| - 1$, check whether it satisfies the condition:
$$\sum_{r_j \in B} b_{r_j}(o_1) = \sum_{r_j \in B} b_{r_j}(o_2) = \cdots = \sum_{r_j \in B} b_{r_j}(o_n). \quad (8)$$
6. Output: if one of these subsets satisfies condition (8), select any of their complements in R as an optimal N-PR. Otherwise, check condition (8) for each $B \subset R$ with cardinality $|R| - 2$, selecting $R - B$ as an optimal N-PR, and so on. If no $B \subset R$ satisfies (8), then there is no N-PR of the BFSS $(G, R)$.
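In the same illustrative setting, the constant-sum condition (8) for a candidate subset B is a short check; Algorithm 4 then scans the subsets of R from largest to smallest, exactly as in the OCB-PR sketch above.

```python
def npr_dispensable(cols, B, n, tol=1e-9):
    """Condition (8): the sums of the scores over B are equal for all objects,
    so deleting B shifts every final score S_i by the same constant."""
    sums = [sum(cols[r][i] for r in B) for i in range(n)]
    return max(sums) - min(sums) <= tol
```

Because the deleted columns contribute the same constant to every $S_i$, both the optimal object and all pairwise score differences survive the reduction, which is why N-PR also resolves the added-parameter issue discussed in Section 4.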
Example 5.
Let O = { o 1 , o 2 , o 3 , o 4 , o 5 , o 6 } be the set of six objects and R = { r 1 , r 2 , r 3 , r 4 , r 5 } a set of parameters. Then, a BFSS ( G , R ) is defined by Table 20.
By using (2) and (3), the scores of the positive b r j + ( o i ) and negative b r j ( o i ) membership degrees for i = 1 , 2 , , 6 and j = 1 , 2 , , 5 are described by Table 21 and Table 22, respectively.
Now, by using Definition 4, the tabular arrangement for the scores of the membership degrees $b_{r_j}(o_i)$, where $i = 1, 2, \ldots, 6$ and $j = 1, 2, \ldots, 5$, of $(G, R)$ is given by Table 23.
From (5), the final score of each object $o_i$ is given by Table 24.
Clearly, $S_6 = 6.2$ is the maximum score, attained by the object $o_6$. Thus, $o_6$ is the optimal decision object. From Table 23, it can easily be observed that $B = \{r_3, r_5\}$ satisfies:
$$\sum_{r_j \in B} b_{r_j}(o_1) = \sum_{r_j \in B} b_{r_j}(o_2) = \cdots = \sum_{r_j \in B} b_{r_j}(o_6) = 0.$$
Thus, $\{r_1, r_2, r_4\}$ (not the only possibility) is an N-PR of the BFSS $(G, R)$, given by Table 25.
Clearly, the N-PR method keeps the rank of the decision choices invariant and also preserves the differences between the decision choice objects. Thus, if we add new parameters to the parameter set, there is no need to compute a new reduction again. The issue of added parameters is discussed through examples in Section 4.
4. AN-PR:
N-PR is an outstanding technique for the reduction of parameters. However, it is often difficult to compute an N-PR, because a BFSS describes membership degrees with bipolar information. To improve this method, we propose a new reduction method, namely AN-PR, which is a compromise between IRDCB-PR and N-PR.
Definition 10.
Let $O = \{o_1, o_2, \ldots, o_n\}$ be a universe of objects and $R = \{r_1, r_2, \ldots, r_m\}$ a universal set of parameters associated with the objects in O. For a BFSS $(G, R)$ and an arbitrary error value $\alpha$, if there exists $B = \{r_1, r_2, \ldots, r_p\} \subset R$ such that:
$$\sum_{r_j \in B} b_{r_j}(o_1) \approx \sum_{r_j \in B} b_{r_j}(o_2) \approx \cdots \approx \sum_{r_j \in B} b_{r_j}(o_n)$$
inside the range of $\alpha$, and $D_{R-B} = D_R$, then B is dispensable; otherwise, B is indispensable. A subset $P \subset R$ is called an AN-PR of the BFSS $(G, R)$ when the following three axioms hold.
1. P is indispensable.
2. $\sum_{r_j \in R-P} b_{r_j}(o_1) \approx \sum_{r_j \in R-P} b_{r_j}(o_2) \approx \cdots \approx \sum_{r_j \in R-P} b_{r_j}(o_n)$ inside the range of $\alpha$.
3. $D_P = D_R$.
We are ready to propose the AN-PR algorithm (Algorithm 5 below):
Algorithm 5 AN-PR.
1. Input: O, a universal set of n objects; R, a universe of m parameters; $\alpha$, an error value; and a BFSS $(G, R)$ as given by Definition 2.
2. Calculate the scores of the positive $b^+_{r_j}(o_i)$ and negative $b^-_{r_j}(o_i)$ membership degrees for $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, m$.
3. Calculate the scores of the membership degrees $b_{r_j}(o_i)$ for $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, m$ by Definition 4.
4. Evaluate the final score $S_i$ of each object $o_i$, $i = 1, 2, \ldots, n$, by Definition 5.
5. For each $B \subset R$ with cardinality $|R| - 1$, check whether it satisfies the expressions:
$$\sum_{r_j \in B} b_{r_j}(o_1) \approx \sum_{r_j \in B} b_{r_j}(o_2) \approx \cdots \approx \sum_{r_j \in B} b_{r_j}(o_n) \quad (9)$$
inside the range of $\alpha$, and:
$$D_{R-B} = D_R. \quad (10)$$
6. Output: if one of these subsets satisfies conditions (9) and (10), select any of their complements in R as an optimal AN-PR. Otherwise, check conditions (9) and (10) for each $B \subset R$ with cardinality $|R| - 2$, selecting any of their complements in R as an optimal AN-PR, and so on. If no $B \subset R$ satisfies conditions (9) and (10), then there is no AN-PR of the BFSS $(G, R)$.
As mentioned earlier, AN-PR is a compromise between IRDCB-PR and N-PR. Note that without the limitation imposed by $\alpha$, AN-PR reduces to IRDCB-PR, and when $\alpha = 0$, AN-PR reduces to N-PR. The reduction sets obtained by AN-PR thus depend on the outcomes of IRDCB-PR and on the prescribed range $\alpha$, because the AN-PR algorithm relies on IRDCB-PR: reduction sets through AN-PR are computed from the reduction sets through IRDCB-PR. If the difference between the highest and lowest sums of the scores of the deleted parameters is lower than $\alpha$, the reduction set is a parameter reduction through AN-PR; otherwise, it is not. Note that the AN-PR method preserves the rank of the decision choices. A sketch of the dispensability test behind Algorithm 5 is given below.
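In code, AN-PR only relaxes condition (8) to a spread of at most $\alpha$ while additionally demanding that the decision partition be preserved. A sketch combining the two previous checks (it assumes the `irdcb_dispensable` helper from the IRDCB-PR sketch; all names are illustrative):

```python
def anpr_dispensable(cols, B, n, alpha):
    """Conditions (9) and (10): the B-sums agree within alpha, and deleting B
    preserves the decision partition D_R."""
    sums = [sum(cols[r][i] for r in B) for i in range(n)]
    within_alpha = max(sums) - min(sums) <= alpha
    return within_alpha and irdcb_dispensable(cols, B, n)
```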
Example 6.
Let $O = \{o_1, o_2, o_3, o_4, o_5, o_6\}$ be a universal set of six objects and $R = \{r_1, r_2, r_3, r_4, r_5\}$ a set of parameters. Then, a BFSS $(G, R)$ is defined by Table 26.
By using (2) and (3), the scores of the positive b r j + ( o i ) and negative b r j ( o i ) membership degrees for i = 1 , 2 , , 6 and j = 1 , 2 , , 5 are given by Table 27 and Table 28, respectively.
Now, by using Definition 4, the tabular arrangement for the score of membership degrees b r j ( o i ) where i = 1 , 2 , , 6 and j = 1 , 2 , , 5 of BFSS ( G , R ) is given by Table 29.
From ( 5 ) , the final score of each object o i is given by Table 30.
Clearly, $S_5 = 3.0$ is the maximum score, attained by the object $o_5$. Thus, $o_5$ is an optimal decision object. Given an error value $\alpha = 0.9$ and using Table 30, we can easily compute that $B = \{r_2, r_5\}$ yields $\sum_{r_j \in B} b_{r_j}(o_1) = -0.1$, $\sum_{r_j \in B} b_{r_j}(o_2) = 0.7$, and $\sum_{r_j \in B} b_{r_j}(o_i) = -0.1$ for $i = 3, 4, 5, 6$, satisfying:
$$\sum_{r_j \in B} b_{r_j}(o_1) \approx \sum_{r_j \in B} b_{r_j}(o_2) \approx \cdots \approx \sum_{r_j \in B} b_{r_j}(o_6),$$
since the largest difference, $0.7 - (-0.1) = 0.8$, lies within the range of $\alpha = 0.9$.
From Table 30, $D_R = \{ \{o_5\}_{3.0}, \{o_4\}_{0.6}, \{o_2\}_{-0.2}, \{o_3\}_{-1.2}, \{o_6\}_{-1.6}, \{o_1\}_{-2.0} \}$. Also, $D_{R-B} = \{ \{o_5\}_{3.1}, \{o_4\}_{0.7}, \{o_2\}_{-0.9}, \{o_3\}_{-1.1}, \{o_6\}_{-1.5}, \{o_1\}_{-1.9} \}$, satisfying $D_{R-B} = D_R$. Hence, $\{r_1, r_3, r_4\}$ is an AN-PR of the BFSS $(G, R)$, given in Table 31.

4. Comparison

This section compares our proposed parameter reduction algorithms regarding the EDCR, applicability, exact degree of reduction, reduction result, multi-use of the reduction set, and applied situation.
1. Comparison of EDCR and applicability:
Assume that a coefficient q represents the ratio of correctly-computed parameter reduction in different datasets. In other words, q represents the applicability of our proposed reduction techniques in practical applications and will be interpreted as EDCR. OCB-PR only preserves the optimal decision object. Therefore, a parameter reduction is easy to compute with OCB-PR. For example, { r 1 } is the OCB-PR in Example 3; { r 1 } and { r 3 } are the OCB-PR in Example 4; { r 1 } , { r 2 } , { r 3 } and { r 4 } are the OCB-PR in Example 5; { r 4 } is the OCB-PR in Example 6. Hence, q = 100 % .
IRDCB-PR is designed to delete the irrelevant parameters by preserving the partitioning and rank of objects. Obviously, parameter reduction using IRDCB-PR is more difficult than OCB-PR. For instance, { r 3 } is the IRDCB-PR in Example 4, and { r 1 , r 2 , r 4 } and { r 2 , r 3 , r 4 } are the IRDCB-PR in Example 5. We can observe that there is no IRDCB-PR in Examples 3 and 6. Thus, q = 2 / 4 = 50 % .
N-PR maintains both invariable rank and unchangeable differences between decision choices. Using the N-PR algorithm is the most difficult to obtain parameter reduction as compared to other proposed reduction methods. We can see that { r 1 , r 2 , r 4 } and { r 2 , r 3 , r 4 } are the N-PRs in Example 5. Unfortunately, there is no N-PR in Examples 3, 4, and 6. Thus, q = 1 / 3 = 33.3 % .
AN-PR is a compromise between IRDCB-PR and N-PR. Without α , AN-PR is IRDCB-PR, and when α = 0 , AN-PR is N-PR. Thus, the EDCR of AN-PR depends on α . Therefore, q = 1 / 3 = 33.3 % .
2. Comparison of the exact degree of reduction and reduction results:
The exact degree of parameter reduction considers the precision of parameter reduction and its impact on the post-reduction decision object. OCB-PR only keeps the optimal decision object unchanged after reduction (that is, the rank of decision choices may be changed after reduction). Therefore, the exact degree of reduction is lower. IRDCB-PR reduces redundant parameters by preserving the partitioning and rank of objects. Therefore, the exact degree of reduction is higher as compared to OCB-PR. N-PR preserves both rank and unchangeable differences between decision choices. Therefore, its exact degree of reduction is highest.
3. Comparison of the multiple use of the parameter reduction and applied situation:
The multiple use of parameter reduction means that the reduction sets can be reused when the expert demands suboptimal parameters and when he/she adds some new parameters.
(i) Comparison of the multiple use of the parameter reduction and applied situation of OCB-PR:
OCB-PR usually has a wider range of applications. As we know, it only provides the optimal option. After selecting the best choice, if the data of the optimal object are deleted from the dataset, then, for the next decision, we need to make a new reduction again, which wastes much time on the parameter reduction. Furthermore, the added parameter set has not been considered. If new parameters are added to the parameter set, a new reduction is required. We explain these issues by the following example.
Example 7.
From Example 2, clearly, $o_4$, with $S_4 = 5.6$, is the best option in Table 12. An OCB-PR of $(G, R)$ is $\{r_2\}$, which is given by Table 32. When the object $o_4$ is deleted from Table 12, the suboptimal choice object is $o_1$, while from Table 32, it can easily be observed that the suboptimal choice is $o_3$. It is clear that the suboptimal choice has changed.
Let A = { a 1 , a 2 } be the set of added parameters for the BFSS ( G , R ) in Example 2, given by:
$$G(a_1) = \{ (o_1, 0.8, -0.6), (o_2, 0.5, -0.7), (o_3, 0.2, -0.8), (o_4, 0.5, -0.6), (o_5, 0.4, -0.4) \},$$
$$G(a_2) = \{ (o_1, 0.8, -0.8), (o_2, 0.5, -0.6), (o_3, 0.4, -0.6), (o_4, 0.3, -0.7), (o_5, 0.8, -0.6) \}.$$
For the parameters $a_1$ and $a_2$, the scores of the membership degrees $b_{a_j}(o_i)$, where $i = 1, 2, \ldots, 5$ and $j = 1, 2$, are given by Table 33.
By combining Table 12 and Table 33, we can observe that o 1 is the optimal decision object from Table 34, while by combining Table 32 and Table 33, o 4 is the best option from Table 35. Clearly, these two optimal options are different. Thus, OCB-PR has a lower degree of the multiple use of parameter reduction.
(ii) Comparison of multi-use of parameter reduction and applied situation of IRDCB-PR:
IRDCB-PR maintains the rank of suboptimal decision choices. However, the issue of added parameters is not solved by the IRDCB-PR method. We give the following example to explain this idea.
Example 8.
Let A = { a 1 , a 2 } be the set of added parameters for the BFSS ( G , R ) in Example 4, given by:
$$G(a_1) = \{ (o_1, 0.5, -0.5), (o_2, 0.5, -0.6), (o_3, 0.4, -0.8), (o_4, 0.5, -0.8) \},$$
$$G(a_2) = \{ (o_1, 0.7, 0.0), (o_2, 0.5, -0.3), (o_3, 0.6, -0.3), (o_4, 0.4, -0.4) \}.$$
For the parameters a 1 and a 2 , the score of membership degrees is given by Table 36.
Combining Table 18 and Table 19 with Table 36, we see from Table 37 that $D_{R+\{a_1,a_2\}} = \{ \{o_3\}_{2.5}, \{o_4\}_{2.1}, \{o_2\}_{0.3}, \{o_1\}_{-3.1} \}$. Similarly, using Table 38, we get $D_{\{r_3\}+\{a_1,a_2\}} = \{ \{o_2\}_{0.3}, \{o_1, o_3, o_4\}_{-0.1} \}$. Clearly, the ranks of the choice objects in Table 37 and Table 38 are different. From Table 18, we observe that $D_R = \{ \{o_4\}_{3.7}, \{o_3\}_{3.3}, \{o_2\}_{0.3}, \{o_1\}_{-5.5} \}$; an IRDCB-PR of the BFSS $(G, R)$ is $\{r_3\}$, and we can compute that $D_{\{r_3\}} = \{ \{o_4\}_{1.5}, \{o_3\}_{0.7}, \{o_2\}_{0.3}, \{o_1\}_{-2.5} \}$. Thus, IRDCB-PR preserves the partition and rank of the objects after parameter reduction. From the above analysis, we observe that the issue of the suboptimal choice can be solved by the IRDCB-PR method, while the issue of added parameters cannot.
(iii) Comparison of the multiple use of parameter reduction and applied situation of N-PR:
The problems of the suboptimal choice and rank of decision choice objects can be solved using N-PR. The following example addresses this issue.
Example 9.
Let $\{r_1', r_2'\}$ be the set of added parameters for the BFSS $(G, R)$ in Example 5, given by:
$$G(r_1') = \{ (o_1, 0.7, -0.4), (o_2, 0.7, -0.6), (o_3, 0.7, -0.8), (o_4, 0.4, -0.5), (o_5, 0.4, -0.4), (o_6, 0.5, -0.8) \},$$
$$G(r_2') = \{ (o_1, 0.5, -0.4), (o_2, 0.6, -0.3), (o_3, 0.6, -0.2), (o_4, 0.7, 0), (o_5, 0.4, -0.9), (o_6, 0.5, -0.8) \}.$$
For the parameters $r_1'$ and $r_2'$, the scores of the membership degrees are given by Table 39.
Combine Table 24 (the final score table of the BFSS $(G, R)$) and Table 25 (an N-PR of the BFSS $(G, R)$) with Table 39 (the added parameters' score table). From Table 40 and Table 41, we can easily compute that $D_{R+\{r_1', r_2'\}} = \{ \{o_3\}_{5.2}, \{o_6\}_{2.0}, \{o_1\}_{1.0}, \{o_4\}_{0.2}, \{o_5\}_{-2.2}, \{o_2\}_{-4.0} \}$ and $D_{\{r_1, r_2, r_4\}+\{r_1', r_2'\}} = \{ \{o_3\}_{5.2}, \{o_6\}_{2.0}, \{o_1\}_{1.0}, \{o_4\}_{0.2}, \{o_5\}_{-2.2}, \{o_2\}_{-4.0} \}$, respectively. Hence, the ranks of the decision choices are the same. Thus, N-PR has the highest degree of multiple use of reduction sets.
(iv) Comparison of the multiple use of parameter reduction and applied situation of AN-PR
No doubt, N-PR is a suitable approach for parameter reduction, but it is very hard to compute the N-PR because BFSS provides bipolar information to describe membership degrees. To reduce this computational difficulty, AN-PR is given as a compromise between IRDCB-PR and N-PR.
Example 10.
By combining Table 30 (final score table for the BFSS ( G , R ) in Example 6) and Table 31 (AN-PR of BFSS ( G , R ) ) with Table 39 (added parameters’ score table), we get Table 42 and Table 43, respectively.
From Table 42 and Table 43, we find that $D_{R+\{r_1', r_2'\}} = \{ \{o_4\}_{3.6}, \{o_2\}_{1.6}, \{o_3\}_{0}, \{o_1\}_{-0.2}, \{o_5\}_{-0.6}, \{o_6\}_{-5.7} \}$ and $D_{\{r_1, r_3, r_4\}+\{r_1', r_2'\}} = \{ \{o_4\}_{3.7}, \{o_2\}_{0.9}, \{o_3\}_{0.1}, \{o_1\}_{-0.1}, \{o_5\}_{-0.5}, \{o_6\}_{-5.7} \}$, respectively. Hence, the ranks of the decision choices are the same, with only small differences among the decision choices, within the range of $\alpha = 0.9$. Thus, AN-PR has the highest degree of multiple use of reduction sets.

5. Application

To demonstrate our proposed techniques, we apply them to a practical application.
Let O = { o 1 , o 2 , , o 12 } be a set of twelve investment avenues, where:
o 1 ’ represents “Bank Deposits”,
o 2 ’ represents “Insurance”,
o 3 ’ represents “Foreign or Overseas Mutual Fund”,
o 4 ’ represents “Bonds Offered by the Government and Corporates”,
o 5 ’ represents “Equity Mutual Funds”,
o 6 ’ represents “Precious Objects”,
o 7 ’ represents “Postal Savings”,
o 8 ’ represents “Shares and Stocks”,
o 9 ’ represents “Employee Provident Fund”,
o 10 ’ represents “Company Deposits”,
o 11 ’ represents “Real Estate”,
o 12 ’ represents “Money Market Instruments”,
and $R = \{r_1, r_2, \ldots, r_{10}\}$ be a collection of parameters associated with the objects in O (the $r_i$ are factors influencing the investment decision), where:
r 1 ’ denotes “Safety of Funds”,
r 2 ’ denotes “Liquidity of Funds”,
r 3 ’ denotes “State Policy”,
r 4 ’ denotes “Maximum Profit in Minimum Period”,
r 5 ’ denotes “Stable Return”,
r 6 ’ denotes “Easy Accessibility”,
r 7 ’ denotes “Tax Concession”,
r 8 ’ denotes “Minimum Risk of Possession”,
r 9 ’ denotes “Political Climate”,
r 10 ’ denotes “Level of Income”.
An investor Z wants to invest in a most suitable investment avenue from the above-mentioned investment avenues. The information between the investment avenues and influenced factors is given in the form of a BFSS ( G , R ) , which is given by Table 44.
By using (2) and (3), the score of the positive b r j + ( o i ) and negative b r j ( o i ) membership degrees for i = 1 , 2 , , 12 and j = 1 , 2 , , 10 are described by Table 45 and Table 46, respectively.
Now, by using Definition 4, the tabular arrangement for the scores of the membership degrees $b_{r_j}(o_i)$, where $i = 1, 2, \ldots, 12$ and $j = 1, 2, \ldots, 10$, of the BFSS $(G, R)$ is given by Table 47.
From ( 5 ) , the final score of each object o i , i = 1 , 2 , , 12 is given by Table 48.
Clearly, $S_{11} = 25.9$ is the maximum score, attained by the object $o_{11}$. Thus, the investment avenue $o_{11}$, namely real estate, is the best choice for the investor Z. Our proposed reduction algorithms were executed on the investment avenue dataset. The parameter reduction sets were readily computed by OCB-PR; for instance, $\{r_1\}$ is a minimal reduction (not the only one) that keeps the optimal decision invariant. Regrettably, we obtained no parameter reduction through IRDCB-PR, AN-PR, or N-PR. This means that OCB-PR can be applied in more real-life decision-making situations than IRDCB-PR, AN-PR, and N-PR.

6. Conclusions

Parameter reduction is one of the main issues in soft set modeling and its hybrid models, including fuzzy soft set theory. Parameter reduction preserves the decision while removing the irrelevant parameters. In this paper, a novel approach for decision-making based on BFSSs was introduced, and some decision-making problems, including one presented in [30], were solved by this newly-proposed approach to prove its validity; the optimal decisions obtained by both methods were the same. Building on this concept, four novel parameter reductions of BFSSs, namely OCB-PR, IRDCB-PR, N-PR, and AN-PR, were defined and illustrated through examples. Due to the existence of bipolar information in many real-world problems, the newly-proposed decision-making method based on BFSSs and the parameter reductions of BFSSs are very efficient approaches to solve such problems, when compared to some existing methods, including fuzzy soft sets [32] and their parameter reduction [33]. An algorithm for each parameter reduction approach was developed. Moreover, our proposed reduction methods were compared from theoretical and experimental points of view, as displayed in Table 49. Finally, an application was studied to show the feasibility of our proposed reduction algorithms. In the future, we expect to extend this work to the parameter reduction of (1) Pythagorean fuzzy soft sets, (2) Pythagorean fuzzy bipolar soft sets, and (3) m-polar fuzzy soft sets.

Author Contributions

G.A., M.A., A.N.A.K., and J.C.R.A. conceived of the presented concept. G.A. and M.A. developed the theory and performed the computations. A.N.A.K. and J.C.R.A. verified the analytical methods.

Funding

This research received no external funding.

Acknowledgments

The authors are grateful to the Editor of the Journal and the anonymous referees for their valuable comments.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

1. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353.
2. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356.
3. Molodtsov, D. Soft set theory: First results. Comput. Math. Appl. 1999, 37, 19–31.
4. Molodtsov, D. The Theory of Soft Sets; URSS Publishers: Moscow, Russia, 2004. (In Russian)
5. Maji, P.K.; Biswas, R.; Roy, A.R. Soft set theory. Comput. Math. Appl. 2003, 45, 555–562.
6. Feng, F.; Li, C.; Davvaz, B.; Ali, M.I. Soft sets combined with fuzzy sets and rough sets: A tentative approach. In Soft Computing A Fusion of Foundations, Methodologies and Applications; Springer: Berlin/Heidelberg, Germany, 2009; pp. 899–911.
7. Roy, A.R.; Maji, P.K. A fuzzy soft set theoretic approach to decision-making problems. J. Comput. Appl. Math. 2007, 203, 412–418.
8. Xiao, Z.; Gong, K.; Zou, Y. A combined forecasting approach based on fuzzy soft sets. J. Comput. Appl. Math. 2009, 228, 326–333.
9. Ali, M.I. Another view on reduction of parameters in soft sets. Appl. Soft Comput. 2012, 12, 1814–1821.
10. Danjuma, S.; Ismail, M.A.; Herawan, T. An alternative approach to normal parameter reduction algorithm for soft set theory. IEEE Access 2017, 5, 4732–4746.
11. Danjuma, S.; Herawan, T.; Ismail, M.A.; Chiroma, H.; Abubakar, A.I.; Zeki, A.M. A review on soft set-based parameter reduction and decision-making. IEEE Access 2017, 5, 4671–4689.
12. Deng, T.; Wang, X. Parameter significance and reductions of soft sets. Int. J. Comput. Math. 2012, 89, 1979–1995.
13. Kong, Z.; Jia, W.; Zhang, G.; Wang, L. Normal parameter reduction in soft set based on particle swarm optimization algorithm. Appl. Math. Model. 2015, 39, 4808–4820.
14. Zhan, J.; Alcantud, J.C.R. A survey of parameter reduction of soft sets and corresponding algorithms. Artif. Intell. Rev. 2017.
15. Maji, P.K.; Roy, A.R. An application of soft sets in a decision-making problem. Comput. Math. Appl. 2002, 44, 1077–1083.
16. Pawlak, Z.; Skowron, A. Rudiments of rough sets. Inf. Sci. 2007, 177, 3–27.
17. Chen, D.; Tsang, E.C.C.; Yeung, D.S.; Wang, X. The parameterization reduction of soft sets and its applications. Comput. Math. Appl. 2005, 49, 757–763.
18. Kong, Z.; Gao, L.; Wang, L.; Li, S. The normal parameter reduction of soft sets and its algorithm. Comput. Math. Appl. 2008, 56, 3029–3037.
19. Ma, X.; Sulaiman, N.; Qin, H.; Herawan, T.; Zain, J.M. A new efficient normal parameter reduction algorithm of soft sets. Comput. Math. Appl. 2011, 62, 588–598.
20. Kong, Z.; Gao, L.Q.; Wang, L.F. Comment on "A fuzzy soft set theoretic approach to decision making problems". J. Comput. Appl. Math. 2009, 223, 540–542.
21. Ma, X.; Qin, H.; Sulaiman, N.; Herawan, T.; Abawajy, J. The parameter reduction of the interval-valued fuzzy soft sets and its related algorithms. IEEE Trans. Fuzzy Syst. 2014, 22, 57–71.
22. Feng, F.; Jun, Y.B.; Liu, X.; Li, L. An adjustable approach to fuzzy soft set based decision-making. J. Comput. Appl. Math. 2010, 234, 10–20.
23. Feng, F.; Li, Y.; Fotea, V.L. Application of level soft sets in decision-making based on interval-valued fuzzy soft sets. Comput. Math. Appl. 2010, 60, 1756–1767.
24. Jiang, Y.; Tang, Y.; Chen, Q. An adjustable approach to intuitionistic fuzzy soft sets based decision-making. Appl. Math. Model. 2011, 35, 824–836.
25. Giorleo, G.; Minutolo, F.M.C.; Sergi, V. Fuzzy logic modeling and control of steel rod quenching after hot rolling. J. Mater. Eng. Perform. 1997, 6, 599–604.
26. Kahraman, C.; Gulbay, M.; Kabak, O. Applications of fuzzy sets in industrial engineering: A topical classification. In Fuzzy Applications in Industrial Engineering. Studies in Fuzziness and Soft Computing; Kahraman, C., Ed.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 201.
27. Ross, T.J. Fuzzy Logic with Engineering Applications; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2010.
28. Zhang, W.R. YinYang bipolar fuzzy sets. In Proceedings of the IEEE International Conference on Fuzzy Systems Proceedings and the IEEE World Congress on Computational Intelligence (FUZZ-IEEE '98), Anchorage, AK, USA, 4–9 May 1998; Volume 1, pp. 835–840.
29. Zhang, W.R. Bipolar fuzzy sets and relations: A computational framework for cognitive modeling and multiagent decision analysis. In Proceedings of the First International Joint Conference of the North American Fuzzy Information Processing Society Biannual Conference, the Industrial Fuzzy Control and Intelligent Systems Conference, San Antonio, TX, USA, 18–21 December 1994; pp. 305–309.
30. Saleem, A.; Aslam, M.; Ullah, K. Bipolar fuzzy soft sets and its applications in decision-making problem. J. Intell. Fuzzy Syst. 2014, 27, 729–742.
31. Luqman, A.; Akram, M.; Koam, A.N.A. Granulation of hypernetwork models under the q-rung picture fuzzy environment. Mathematics 2019, 7, 496.
32. Maji, P.K.; Biswas, R.; Roy, A.R. Fuzzy soft sets. J. Fuzzy Math. 2001, 9, 589–602.
33. Zhang, Z. The parameter reduction of fuzzy soft sets based on soft fuzzy rough sets. Adv. Fuzzy Syst. 2013, 2013, 12.
34. Adeel, A.; Akram, M.; Ahmad, I.; Nazar, K. Novel m-polar fuzzy linguistic ELECTRE-I method for group decision-making. Symmetry 2019, 11, 471.
35. Adeel, A.; Akram, M.; Koam, A.N. Multi-criteria decision-making under mHF ELECTRE-I and HmF ELECTRE-I. Energies 2019, 12, 1661.
36. Adeel, A.; Akram, M.; Koam, A.N. Group decision-making based on m-polar fuzzy linguistic TOPSIS method. Symmetry 2019, 11, 735.
37. Akram, M.; Ali, G.; Alshehri, N.O. A new multi-attribute decision-making method based on m-polar fuzzy soft rough sets. Symmetry 2017, 9, 271.
38. Akram, M.; Adeel, A.; Alcantud, J.C.R. Multi-criteria group decision-making using an m-polar hesitant fuzzy TOPSIS approach. Symmetry 2019, 11, 795.
39. Akram, M.; Habib, A.; Koam, A.N.A. A novel description on edge-regular q-rung picture fuzzy graphs with application. Symmetry 2019, 11, 489.
40. Lee, K.M. Bipolar-valued fuzzy sets and their basic operations. In Proceedings of the International Conference, Bangkok, Thailand, 16–21 January 2000; pp. 307–317.
Table 1. Notations.
O: Universe of objects
R: Universe of parameters
$\lambda^+(o)$: Positive membership degree of an object o
$\lambda^-(o)$: Negative membership degree of an object o
BFSSs: Bipolar fuzzy soft sets
$b^+_{r_j}(o)$: Score of the positive membership degrees of an object o
$b^-_{r_j}(o)$: Score of the negative membership degrees of an object o
$b_{r_j}(o)$: Score of the membership degrees of an object o
$S_i$: Final score of the object $o_i$
$\mathrm{BF}^O$: Family of all bipolar fuzzy sets of O
OCB-PR: Optimal choice-based parameter reduction
IRDCB-PR: Invariable rank of decision choices-based parameter reduction
N-PR: Normal parameter reduction
AN-PR: Approximate normal parameter reduction
EDCR: Easiness degree of computing reduction
Table 2. Tabular representation of the BFSS $(G, R)$.
O/R: $r_1$, $r_2$, ..., $r_m$
$o_1$: $(\lambda^+_{r_1}(o_1), \lambda^-_{r_1}(o_1))$, $(\lambda^+_{r_2}(o_1), \lambda^-_{r_2}(o_1))$, ..., $(\lambda^+_{r_m}(o_1), \lambda^-_{r_m}(o_1))$
$o_2$: $(\lambda^+_{r_1}(o_2), \lambda^-_{r_1}(o_2))$, $(\lambda^+_{r_2}(o_2), \lambda^-_{r_2}(o_2))$, ..., $(\lambda^+_{r_m}(o_2), \lambda^-_{r_m}(o_2))$
...
$o_n$: $(\lambda^+_{r_1}(o_n), \lambda^-_{r_1}(o_n))$, $(\lambda^+_{r_2}(o_n), \lambda^-_{r_2}(o_n))$, ..., $(\lambda^+_{r_m}(o_n), \lambda^-_{r_m}(o_n))$
Table 3. Tabular representation of the BFSS $(G, Q)$.
O/Q r1 r2 r5
o1 (0.4, -0.5) (0.5, -0.5) (0.7, 0)
o2 (0.6, -0.3) (0.3, -0.1) (0.5, -0.3)
o3 (0.8, -0.2) (0.4, -0.4) (0.6, -0.3)
o4 (0.5, -0.2) (0.7, -0.3) (0.4, -0.4)
Table 4. Score of the positive membership degrees of $(G, Q)$.
+ r 1 r 2 r 5
o 1 0.7 0.2 0.6
o 2 0.1 0.7 0.2
o 3 0.9 0.3 0.2
o 4 0.3 0.9 0.6
Table 5. Score of the negative membership degrees of $(G, Q)$.
r 1 r 2 r 5
o 1 0.9 0.7 1
o 2 0 0.9 0.2
o 3 0.4 0.3 0.2
o 4 0.4 0.5 0.6
Table 6. Score of the membership degrees of $(G, Q)$.
O r 1 r 2 r 5
o 1 1.6 0.5 1.6
o 2 0.1 0.2 0.4
o 3 1.3 0.6 0
o 4 0.1 1.4 1.2
Table 7. Final score ($S_i$) of the membership degrees of $(G, Q)$.
O r 1 r 2 r 5 S i
o 1 1.6 0.5 1.6 0.5
o 2 0.1 0.2 0.4 0.1
o 3 1.3 0.6 0 0.7
o 4 0.1 1.4 1.2 2.5
Table 8. Tabular representation of the BFSS $(G, R)$.
O/R r1 r2 r3 r4
o1 (0.7, -0.5) (0.6, -0.6) (0.8, -0.6) (0.7, -0.5)
o2 (0.2, -0.9) (0.7, -0.4) (0.2, -0.5) (0.8, -0.4)
o3 (0.4, -0.5) (0.8, -0.5) (0.1, -0.7) (0.9, -0.6)
o4 (0.8, -0.3) (0.9, 0) (0.5, -0.6) (0.2, -0.4)
o5 (0.6, -0.4) (0.2, -0.7) (0.3, -0.6) (0.1, -0.9)
Table 9. Score of the positive membership degrees of $(G, R)$.
+ r 1 r 2 r 3 r 4
o 1 0.8 0.2 2.1 0.8
o 2 1.7 0.3 0.9 1.3
o 3 0.7 0.8 1.4 1.8
o 4 1.3 1.3 0.6 1.7
o 5 0.3 0.7 0.4 2.2
Table 10. Score of the negative membership degrees of $(G, R)$.
r 1 r 2 r 3 r 4
o 1 0.1 0.8 0 0.3
o 2 1.9 0.2 0.5 0.8
o 3 0.1 0.3 0.5 0.2
o 4 1.1 2.2 0 0.8
o 5 0.6 1.3 0 1.7
Table 11. Score of the membership degrees of $(G, R)$.
O r 1 r 2 r 3 r 4
o 1 0.9 1.0 2.1 1.1
o 2 3.6 0.1 0.4 2.1
o 3 0.6 0.5 1.9 1.6
o 4 2.4 3.5 0.6 0.9
o 5 0.9 3.3 0.4 3.9
Table 12. Final score ($S_i$) of the membership degrees of $(G, R)$.
O r 1 r 2 r 3 r 4 S i
o 1 0.9 1.0 2.1 1.1 3.1
o 2 3.6 0.1 0.4 2.1 2.0
o 3 0.6 0.5 1.9 1.6 0.4
o 4 2.4 3.5 0.6 0.9 5.6
o 5 0.9 3.3 0.4 3.9 6.7
Table 13. OCB-PR.
O r 1 S i
o 1 1.6 1.6
o 2 0.1 0.1
o 3 1.3 1.3
o 4 0.1 0.1
Table 14. Tabular arrangement of the BFSS $(G, R)$.
O/R r1 r2 r3
o1 (0.2, -0.5) (0.1, -0.6) (0.2, -0.8)
o2 (0.3, -0.6) (0.3, -0.5) (0.4, -0.3)
o3 (0.4, -0.3) (0.6, -0.1) (0.5, -0.3)
o4 (0.7, -0.2) (0.4, -0.4) (0.7, -0.3)
Table 15. Score of the positive membership degrees of $(G, R)$.
+ r 1 r 2 r 3
o 1 0.8 1.0 1.0
o 2 0.4 0.2 0.2
o 3 0 1.0 0.2
o 4 1.2 0.2 1.0
Table 16. Score of the negative membership degrees of $(G, R)$.
r 1 r 2 r 3
o 1 0.4 0.8 1.5
o 2 0.8 0.4 0.5
o 3 0.4 1.2 0.5
o 4 0.8 0 0.5
Table 17. Score of the membership degrees of $(G, R)$.
O r 1 r 2 r 3
o 1 1.2 1.8 2.5
o 2 1.2 0.6 0.3
o 3 0.4 2.2 0.7
o 4 2.0 0.2 1.5
Table 18. Final score ($S_i$) of the membership degrees of $(G, R)$.
O r 1 r 2 r 3 S i
o 1 1.2 1.8 2.5 5.5
o 2 1.2 0.6 0.3 0.3
o 3 0.4 2.2 0.7 3.3
o 4 2.0 0.2 1.5 3.7
Table 19. IRDCB-PR.
O r 3 S i
o 1 2.5 2.5
o 2 0.3 0.3
o 3 0.7 0.7
o 4 1.5 1.5
Table 20. Tabular representation of the BFSS $(G, R)$.
O/R r1 r2 r3 r4 r5
o1 (0.4, -0.5) (0.1, -0.6) (0.2, -0.3) (0.8, -0.5) (0.5, -0.4)
o2 (0.3, -0.6) (0.3, -0.5) (0.1, -0.4) (0.4, -0.7) (0.6, -0.3)
o3 (0.4, -0.5) (0.6, -0.1) (0.2, -0.3) (0.7, -0.3) (0.5, -0.4)
o4 (0.5, -0.6) (0.4, -0.4) (0.3, -0.4) (0.5, -0.7) (0.6, -0.5)
o5 (0.3, -0.4) (0.4, -0.3) (0.1, -0.2) (0.6, -0.2) (0.4, -0.3)
o6 (0.3, -0.3) (0.7, -0.2) (0.1, -0.1) (0.8, -0.1) (0.3, -0.3)
Table 21. Score of the positive membership degrees of $(G, R)$.
+ r 1 r 2 r 3 r 4 r 5
o 1 0.2 1.9 0.2 1.0 0.1
o 2 0.4 0.7 0.4 1.4 0.7
o 3 0.2 1.1 0.2 0.4 0.1
o 4 0.8 0.1 0.8 0.8 0.7
o 5 0.4 0.1 0.4 0.2 0.5
o 6 0.4 1.7 0.4 1.0 1.1
Table 22. Score of the negative membership degrees of $(G, R)$.
r 1 r 2 r 3 r 4 r 5
o 1 0.1 1.5 0.1 0.5 0.2
o 2 0.7 0.9 0.7 1.7 0.4
o 3 0.1 1.5 0.1 0.9 0.2
o 4 0.7 0.3 0.7 1.7 0.8
o 5 0.5 0.3 0.5 1.3 0.4
o 6 1.1 0.9 1.1 1.9 0.4
Table 23. Score of the membership degrees of $(G, R)$.
O r 1 r 2 r 3 r 4 r 5
o 1 0.1 3.4 0.1 0.5 0.1
o 2 1.1 1.6 1.1 3.1 1.1
o 3 0.1 2.6 0.1 1.3 0.1
o 4 0.1 0.4 0.1 2.5 0.1
o 5 0.1 0.2 0.1 1.1 0.1
o 6 0.7 2.6 0.7 2.9 0.7
Table 24. Final score ($S_i$) of the membership degrees of $(G, R)$.
O r 1 r 2 r 3 r 4 r 5 S i
o 1 0.1 3.4 0.1 0.5 0.1 2.8
o 2 1.1 1.6 1.1 3.1 1.1 5.8
o 3 0.1 2.6 0.1 1.3 0.1 4.0
o 4 0.1 0.4 0.1 2.5 0.1 2.8
o 5 0.1 0.2 0.1 1.1 0.1 1.4
o 6 0.7 2.6 0.7 2.9 0.7 6.2
Table 25. N-PR.
O r 1 r 2 r 4 S i
o 1 0.1 3.4 0.5 2.8
o 2 1.1 1.6 3.1 5.8
o 3 0.1 2.6 1.3 4.0
o 4 0.1 0.4 2.5 2.8
o 5 0.1 0.2 1.1 1.4
o 6 0.7 2.6 2.9 6.2
Table 26. Tabular representation of the BFSS $(G, R)$.
O/R r1 r2 r3 r4 r5
o1 (0.5, -0.6) (0.4, -0.5) (0.9, -0.4) (0.4, -0.7) (0.5, -0.4)
o2 (0.7, -0.5) (0.6, -0.3) (0.5, -0.7) (0.8, -0.3) (0.6, -0.2)
o3 (0.4, -0.7) (0.4, -0.2) (0.9, -0.4) (0.3, -0.3) (0.5, -0.4)
o4 (0.3, -0.2) (0.5, -0.3) (0.6, 0) (0.4, -0.6) (0.6, -0.5)
o5 (0.6, -0.4) (0.6, -0.7) (0.5, -0.4) (0.7, -0.1) (0.4, -0.3)
o6 (0.5, -0.2) (0.7, -0.7) (0.5, -0.6) (0.6, -0.7) (0.3, -0.3)
Table 27. Score of the positive membership degrees of $(G, R)$.
+ r 1 r 2 r 3 r 4 r 5
o 1 0 0.2 1.5 1.0 0.1
o 2 1.2 0.4 0.9 1.6 0.7
o 3 0.6 0.2 1.5 1.4 0.1
o 4 1.2 0.8 0.3 1.0 0.7
o 5 0.6 0.4 0.9 1.0 0.5
o 6 0 0.4 0.9 0.4 1.1
Table 28. Score of the negative membership degrees of $(G, R)$.
r 1 r 2 r 3 r 4 r 5
o 1 1.0 0.1 0.1 1.5 0.3
o 2 0.4 0.7 1.7 0.9 1.1
o 3 1.6 0.1 0.1 0.9 0.3
o 4 1.6 0.7 2.5 0.9 0.9
o 5 0.2 0.5 0.1 2.1 0.3
o 6 1.6 1.1 1.1 1.5 0.3
Table 29. Score of the membership degrees of $(G, R)$.
O r 1 r 2 r 3 r 4 r 5
o 1 1.0 0.1 1.6 2.5 0.2
o 2 0.8 1.1 2.6 2.5 1.8
o 3 2.2 0.1 1.6 0.5 0.2
o 4 0.4 0.1 2.2 1.9 0.2
o 5 0.8 0.1 0.8 3.1 0.2
o 6 1.6 0.7 2.0 1.1 0.8
Table 30. Final score ($S_i$) of the membership degrees of $(G, R)$.
O r 1 r 2 r 3 r 4 r 5 S i
o 1 1.0 0.1 1.6 2.5 0.2 2.0
o 2 0.8 1.1 2.6 2.5 1.8 0.2
o 3 2.2 0.1 1.6 0.5 0.2 1.2
o 4 0.4 0.1 2.2 1.9 0.2 0.6
o 5 0.8 0.1 0.8 3.1 0.2 3.0
o 6 1.6 0.7 2.0 1.1 0.8 1.6
Table 31. AN-PR.
O r1 r3 r4 Si
o 1 1.0 1.6 2.5 1.9
o 2 0.8 2.6 2.5 0.9
o 3 2.2 1.6 0.5 1.1
o 4 0.4 2.2 1.9 0.7
o 5 0.8 0.8 3.1 3.1
o 6 1.6 2.0 1.1 1.5
Table 32. OCB-PR in Example 2.
O r 2 S i
o 1 1.0 1.0
o 2 0.1 0.1
o 3 0.5 0.5
o 4 3.5 3.5
o 5 3.3 3.3
Table 33. Added parameters' scores.
O a 1 a 2
o 1 1.7 0.5
o 2 0.3 0
o 3 2.3 0.5
o 4 0.2 1.5
o 5 0.7 1.5
Table 34. Combination of Table 12 and Table 33.
O r 1 r 2 r 3 r 4 a 1 a 2 S i
o 1 0.9 1.0 2.1 1.1 1.7 0.5 5.3
o 2 3.6 0.1 0.4 2.1 0.3 0 2.3
o 3 0.6 0.5 1.9 1.6 2.3 0.5 3.2
o 4 2.4 3.5 0.6 0.9 0.2 1.5 4.3
o 5 0.9 3.3 0.4 3.9 0.7 1.5 4.5
Table 35. Combination of Table 32 and Table 33.
O r 2 a 1 a 2 S i
o 1 1.0 1.7 0.5 1.2
o 2 0.1 0.3 0 0.4
o 3 0.5 2.3 0.5 2.3
o 4 3.5 0.2 1.5 2.2
o 5 3.3 0.7 1.5 1.1
Table 36. Added parameters' scores.
O a 1 a 2
o 1 0.8 1.6
o 2 0.4 0.4
o 3 0.8 0
o 4 0.4 1.2
Table 37. Combination of Table 18 and Table 36.
O r 1 r 2 r 3 a 1 a 2 S i
o 1 1.2 1.8 2.5 0.8 1.6 3.1
o 2 1.2 0.6 0.3 0.4 0.4 0.3
o 3 0.4 2.2 0.7 0.8 0 2.5
o 4 2.0 0.2 1.5 0.4 1.2 2.1
Table 38. Combination of Table 19 and Table 36.
O r 3 a 1 a 2 S i
o 1 2.5 0.8 1.6 0.1
o 2 0.3 0.4 0.4 0.3
o 3 0.7 0.8 0 0.1
o 4 1.5 0.4 1.2 0.1
Table 39. Added parameters' scores.
O r1' r2'
o 1 1.9 0.1
o 2 0.7 1.1
o 3 0.5 1.7
o 4 0.5 3.5
o 5 0.1 3.7
o 6 1.7 2.5
Table 40. Final score ($S_i$) of the membership degrees of $(G, R)$ with the added parameters.
O r1 r2 r3 r4 r5 r1' r2' Si
o 1 0.1 3.4 0.1 0.5 0.1 1.9 0.1 1.0
o 2 1.1 1.6 1.1 3.1 1.1 0.7 1.1 4.0
o 3 0.1 2.6 0.1 1.3 0.1 0.5 1.7 5.2
o 4 0.1 0.4 0.1 2.5 0.1 0.5 3.5 0.2
o 5 0.1 0.2 0.1 1.1 0.1 0.1 3.7 2.2
o 6 0.7 2.6 0.7 2.9 0.7 1.7 2.5 2.0
Table 41. Combination of Table 25 and Table 39.
O r1 r2 r4 r1' r2' Si
o 1 0.1 3.4 0.5 1.9 0.1 1.0
o 2 1.1 1.6 3.1 0.7 1.1 4.0
o 3 0.1 2.6 1.3 0.5 1.7 5.2
o 4 0.1 0.4 2.5 0.5 3.5 0.2
o 5 0.1 0.2 1.1 0.1 3.7 2.2
o 6 0.7 2.6 2.9 1.7 2.5 2.0
Table 42. Combination of Table 30 and Table 39.
O r1 r2 r3 r4 r5 r1' r2' Si
o 1 1.0 0.1 1.6 2.5 0.2 1.9 0.1 0.2
o 2 0.8 1.1 2.6 2.5 1.8 0.7 1.1 1.6
o 3 2.2 0.1 1.6 0.5 0.2 0.5 1.7 0
o 4 0.4 0.1 2.2 1.9 0.2 0.5 3.5 3.6
o 5 0.8 0.1 0.8 3.1 0.2 0.1 3.7 0.6
o 6 1.6 0.7 2.0 1.1 0.8 1.7 2.5 5.7
Table 43. Combination of Table 31 and Table 39.
O r1 r3 r4 r1' r2' Si
o 1 1.0 1.6 2.5 1.9 0.1 0.1
o 2 0.8 2.6 2.5 0.7 1.1 0.9
o 3 2.2 1.6 0.5 0.5 1.7 0.1
o 4 0.4 2.2 1.9 0.5 3.5 3.7
o 5 0.8 0.8 3.1 0.1 3.7 0.5
o 6 1.6 2.0 1.1 1.7 2.5 5.7
Table 44. Tabular representation of the BFSS $(G, R)$.
O/R r1 r2 r3 r4 r5 r6 r7 r8 r9 r10
o1 (0.7, -0.5) (0.5, -0.6) (0.4, -0.7) (0.3, -0.5) (0.5, -0.4) (0.9, -0.6) (0.3, -0.5) (0.8, -0.6) (0.7, -0.1) (0.6, -0.7)
o2 (0.4, -0.6) (0.7, -0.5) (0.6, -0.3) (0.6, -0.7) (0.6, -0.8) (0.6, -0.7) (0.6, -0.7) (0.6, -0.5) (0.5, 0) (0.8, -0.6)
o3 (0.6, -0.3) (0.4, -0.7) (0.5, -0.8) (0.5, -0.5) (0.4, -0.7) (0.8, -0.8) (0.5, -0.7) (0.3, -0.7) (0.3, -0.6) (0.4, -0.6)
o4 (0.6, -0.6) (0.3, -0.2) (0.5, -0.6) (0.8, -0.4) (0.5, -0.3) (0.5, -0.3) (0.7, -0.2) (0.2, -0.2) (0.2, -0.1) (0.5, -0.3)
o5 (0.5, -0.3) (0.1, -0.8) (0.2, -0.6) (0.7, -0.5) (0.7, -0.5) (0.3, -0.2) (0.8, -0.4) (0.4, -0.4) (0.5, -0.3) (0.7, -0.1)
o6 (0.6, -0.5) (0.6, -0.7) (0.7, -0.4) (0.6, -0.3) (0.4, -0.3) (0.2, -0.6) (0.6, -0.2) (0.4, -0.2) (0.4, -0.1) (0.6, -0.5)
o7 (0.8, -0.4) (0.6, -0.4) (0.8, -0.3) (0.5, -0.7) (0.5, -0.5) (0.8, -0.2) (0.5, -0.4) (0.5, -0.4) (0.5, -0.5) (0.9, -0.4)
o8 (0.8, -0.1) (0.5, -0.2) (0.2, -0.2) (0.3, -0.4) (0.4, -0.6) (0.6, -0.7) (0.7, -0.2) (0.8, -0.4) (0.4, -0.7) (0.5, -0.6)
o9 (0.3, -0.5) (0.7, -0.3) (0.5, -0.7) (0.5, -0.6) (0.8, -0.5) (0.5, -0.7) (0.9, -0.4) (0.6, -0.5) (0.4, -0.5) (0.7, -0.7)
o10 (0.6, -0.5) (0.4, -0.6) (0.6, -0.7) (0.2, -0.6) (0.7, -0.8) (0.9, -0.3) (1.0, -0.2) (0.5, -0.5) (0.7, -0.4) (0.6, -0.5)
o11 (0.9, 0) (0.8, -0.3) (0.6, -0.4) (0.9, -0.2) (0.6, -0.8) (0.8, -0.6) (0.8, -0.5) (0.6, -0.7) (0.8, -0.5) (0.8, -0.6)
o12 (0.2, -0.5) (0.7, -0.7) (0.5, -0.2) (0, -0.5) (0.4, -0.5) (0.5, 0) (0.4, -0.7) (0.4, -0.8) (0.7, -0.6) (0.3, -0.5)
Table 45. Score of the positive membership degrees of $(G, R)$.
+ r 1 r 2 r 3 r 4 r 5 r 6 r 7 r 8 r 9 r 10
o 1 1.4 0.3 1.3 2.3 0.5 3.4 4.2 3.5 2.3 0.2
o 2 2.2 2.1 1.1 1.3 0.7 0.2 0.6 1.1 0.1 2.2
o 3 0.2 1.5 0.1 0.1 1.7 2.2 1.8 2.5 2.5 2.6
o 4 0.2 2.7 0.1 3.7 0.5 1.4 0.6 3.7 3.7 1.4
o 5 1.0 5.1 3.7 2.5 1.9 3.8 1.8 1.3 0.1 1.0
o 6 0.2 0.9 2.3 1.3 1.7 5.