1. Introduction
There are many decision-making problems in the physical, applied, social, and life sciences whose datasets contain uncertain and vague information. Fuzzy set theory [1] and rough set theory [2] are classical mathematical tools for characterizing uncertain and inaccurate data, but as indicated in [3,4], each of these theories lacks an adequate parametrization tool. Molodtsov [3] initiated the concept of the soft set as a new mathematical tool for handling vague and uncertain information, and efficiently applied soft set theory in multiple directions, for example, operations research, game theory, the Riemann integral, and probability. Currently, research on soft set theory is proceeding rapidly and has achieved many fruitful results. Some fundamental algebraic properties of soft sets were proposed by Maji et al. [5]. Feng et al. [6] proposed new hybrid models by combining fuzzy sets, rough sets, and soft sets. Maji and Roy [7] presented a technique to solve decision-making problems based on fuzzy soft sets. Xiao et al. [8] developed a forecasting method based on the fuzzy soft set model.
Since then, much research on parameter reduction has been carried out, and several results have been derived [9,10,11,12,13,14]. Research on solving decision-making problems within soft set theory builds on the concept of parameter reduction, which is designed to remove redundant parameters while preserving the original decision choices. Maji et al. [15] first solved soft set decision-making problems using rough set-based reduction [16]. To improve the decision-making approach of [15], Chen et al. [17] and Kong et al. [18] proposed the parameterization reduction and the normal parameter reduction of soft sets, respectively. Ma et al. [19] proposed a new efficient algorithm for normal parameter reduction that improves on [15,16,17]. Roy and Maji [7] proposed a new method for dealing with decision-making problems with fuzzy soft sets; the method makes a decision by means of a comparison table derived from a fuzzy soft set with respect to the parameters. Kong et al. [20] indicated that the Roy and Maji technique [7] was inaccurate and proposed a modified approach to resolve this issue; they described the effectiveness of the Roy and Maji technique [7] and demonstrated its limitations. Ma et al. [21] proposed extended parameter reduction methods for interval-valued fuzzy soft sets. Decision-making research on the reduction of fuzzy soft sets has received considerable attention. Using the idea of the level soft set, Feng et al. [22] introduced the parameter reduction of fuzzy soft sets and proposed an adjustable method for decision-making based on fuzzy soft sets. Moreover, Feng et al. [23] presented another view of decision-making based on interval-valued fuzzy soft sets. Jiang et al. [24] proposed a reduction method of intuitionistic fuzzy soft sets for decision-making using the level soft sets of intuitionistic fuzzy sets. The theory of fuzzy systems has rich applications in different areas, including engineering [25,26,27]. Zhang [28,29] first proposed the idea of bipolar fuzzy sets (Yin Yang bipolar fuzzy sets) in the space $[0,1] \times [-1,0]$ as an extension of fuzzy sets. In the case of bipolar fuzzy sets, the range of membership degrees is enlarged from the interval $[0,1]$ to $[-1,1]$. The idea behind this description is related to the existence of bipolar information: for example, profit and loss, feedback and feed-forward, and competition and cooperation are usually the two aspects of decision-making. In Chinese medicine, Yin and Yang are the two sides: Yin is the negative side of a system, and Yang is the positive side. Bipolar fuzzy set theory has many applications in different fields, including pattern recognition and machine learning. Saleem et al. [30] presented a new hybrid model, namely bipolar fuzzy soft sets, by combining bipolar fuzzy sets with soft sets. Motivated by these concerns, in this paper, we present four ways to reduce the parameters of bipolar fuzzy soft sets by developing another bipolar fuzzy soft set theoretical approach to solving decision-making problems. In particular, we solve the decision-making problem of [30] by our proposed decision-making algorithm based on bipolar fuzzy soft sets. We propose an algorithm for each reduction technique. Furthermore, we compare these reduction methods and discuss their pros and cons in detail. We also present a real-life application to show the validity of our proposed reduction algorithms. For other terminologies not mentioned in the paper, the readers are referred to [31,32,33,34,35,36,37,38,39].
The rest of this paper is structured as follows. Section 2 introduces the basic definitions and develops a new technique for decision-making based on bipolar fuzzy soft sets. Section 3 defines four kinds of parameter reductions of bipolar fuzzy soft sets and presents their reduction algorithms, which are illustrated by corresponding examples. A comparison among the reduction algorithms is presented in Section 4. Section 5 is devoted to solving a real-life decision-making application. In the end, the conclusions of this paper are provided in Section 6. Throughout this paper, the notation given in Table 1 will be used.
2. Another Bipolar Fuzzy Soft Set Approach to Decision-Making Problems
Saleem et al. [30] presented an efficient approach to solving practical decision-making problems based on bipolar fuzzy soft sets. In this section, we first review the definitions of bipolar fuzzy sets and bipolar fuzzy soft sets, and then we introduce a novel approach based on bipolar fuzzy soft sets that can effectively solve decision-making problems, followed by an algorithm. Moreover, we use our proposed algorithm to solve the decision-making application presented by Saleem et al. [30] and observe that the optimal decisions obtained by both methods are the same.
Definition 1. [29,40] Let O be a nonempty universe of objects. A bipolar fuzzy set B in O is defined as:
$$B = \{(o, \mu_B^+(o), \mu_B^-(o)) : o \in O\}, \qquad (1)$$
where $\mu_B^+ : O \to [0,1]$ and $\mu_B^- : O \to [-1,0]$ are mappings. The positive membership degree $\mu_B^+(o)$ denotes the satisfaction degree of an object o for the property corresponding to the bipolar fuzzy set B, and the negative membership degree $\mu_B^-(o)$ denotes the satisfaction degree of the object o for some implicit counter-property corresponding to the bipolar fuzzy set B.

Definition 2. [30] Let O be a nonempty universe of objects and R a universe of parameters related to the objects in O. A pair $(G, R)$ is called a BFSS over the universe O, where G is a mapping from R into the family of all bipolar fuzzy sets in O; that is, $G(r)$ is a bipolar fuzzy set in O for each parameter $r \in R$.

Assume that $O = \{o_1, o_2, \ldots, o_n\}$ is a universe of objects and $R = \{r_1, r_2, \ldots, r_m\}$ is a universe of parameters related to the objects in O. Then, a BFSS $(G, R)$ can also be presented by a tabular arrangement, as shown in Table 2.
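For readers who wish to experiment with these notions, the following minimal sketch shows one way to store a BFSS in the tabular form of Table 2. The dictionary layout, the toy values, and the helper name `check_bfss` are illustrative assumptions, not constructs from the paper.

```python
# A BFSS (G, R) stored column-wise as in Table 2: each parameter r maps
# to a column, and each cell holds a pair (positive membership in [0, 1],
# negative membership in [-1, 0]). Toy data: two objects, two parameters.
bfss = {
    "r1": {"o1": (0.7, -0.2), "o2": (0.4, -0.5)},
    "r2": {"o1": (0.3, -0.6), "o2": (0.8, -0.1)},
}

def check_bfss(bfss):
    """Verify that every cell respects the bipolar membership ranges."""
    for r, column in bfss.items():
        for o, (pos, neg) in column.items():
            assert 0.0 <= pos <= 1.0, f"bad positive degree at ({o}, {r})"
            assert -1.0 <= neg <= 0.0, f"bad negative degree at ({o}, {r})"

check_bfss(bfss)
```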
Definition 3. Let $O = \{o_1, o_2, \ldots, o_n\}$ be a universe of objects and $R = \{r_1, r_2, \ldots, r_m\}$ a universal set of parameters associated with the objects in O. For a BFSS $(G, R)$, $\mu^+_{ij}$ and $\mu^-_{ij}$ are the membership and non-membership degrees of each element $o_i \in O$ to $r_j \in R$, respectively. We define the scores of the positive and negative membership degrees for each $o_i \in O$ as:
$$S^+_{ij} = \sum_{k=1}^{n} \big(\mu^+_{ij} - \mu^+_{kj}\big), \qquad (2)$$
$$S^-_{ij} = \sum_{k=1}^{n} \big(\mu^-_{ij} - \mu^-_{kj}\big). \qquad (3)$$

Definition 4. Let $O = \{o_1, \ldots, o_n\}$ be a universe of objects and $R = \{r_1, \ldots, r_m\}$ a universal set of parameters associated with the objects in O. For a BFSS $(G, R)$, the score of the membership degrees for $o_i \in O$ is given by:
$$S_{ij} = S^+_{ij} + S^-_{ij}, \qquad (4)$$
where $S^+_{ij}$ and $S^-_{ij}$ are the scores of positive and negative membership degrees for each $o_i \in O$, respectively.

Definition 5. Let $O = \{o_1, \ldots, o_n\}$ be a universe of objects and $R = \{r_1, \ldots, r_m\}$ a universal set of parameters associated with the objects in O. For a BFSS $(G, R)$, the final score for each object $o_i \in O$, denoted by $F_i$, is defined as follows:
$$F_i = \sum_{j=1}^{m} S_{ij}. \qquad (5)$$

We present a new decision-making technique based on BFSSs as follows:
Example 1. Reconsider Example 8 in [30]. Let $O$ be the set of four cars and $R$ a collection of parameters. Then, a BFSS $(G, R)$ is given by Table 3. We proceed to apply Algorithm 1 to $(G, R)$. By using (2) and (3), the scores of positive and negative membership degrees $S^+_{ij}$ and $S^-_{ij}$ are given by Table 4 and Table 5, respectively. Now, by using Definition 4, the tabular arrangement for the score of the membership degrees of the BFSS $(G, R)$ is given by Table 6. By Definition 5, the final score of each car is given by Table 7. Clearly, is the maximum score for the object . Thus, is the decision object, which coincides with the decision obtained in [30].
Algorithm 1 Selection of an object based on BFSSs.
Step 1. Input O, a universal set having n objects; R, a universe of parameters with m elements; and $(G, R)$, a BFSS, which is given by Definition 2.
Step 2. Find the scores of the positive and negative membership degrees $S^+_{ij}$ and $S^-_{ij}$, where $1 \le i \le n$ and $1 \le j \le m$.
Step 3. Calculate the scores of the membership degrees $S_{ij}$, where $1 \le i \le n$ and $1 \le j \le m$, by Definition 4.
Step 4. Evaluate the final score $F_i$ for each object $o_i$ by Definition 5.
Step 5. Compute all indices l for which $F_l = \max_{1 \le i \le n} F_i$.
Output: The decision will be any $o_l$ corresponding to the list of indices obtained in Step 5.
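The following sketch illustrates Algorithm 1 in code. It assumes the comparison-style scores reconstructed in Definition 3 (Equations (2) and (3)); all identifiers (`score_table`, `final_scores`, the toy data) are illustrative, not part of the paper.

```python
# Sketch of Algorithm 1: score every (object, parameter) cell, sum the
# scores per object, and pick the object(s) with the maximum final score.

def score_table(bfss):
    """S_ij = S+_ij + S-_ij, with S+_ij = sum_k (mu+_ij - mu+_kj) and
    S-_ij = sum_k (mu-_ij - mu-_kj), as in Equations (2) and (3)."""
    objects = sorted(next(iter(bfss.values())))
    n = len(objects)
    scores = {o: {} for o in objects}
    for r, column in bfss.items():
        pos_sum = sum(column[o][0] for o in objects)
        neg_sum = sum(column[o][1] for o in objects)
        for o in objects:
            s_pos = n * column[o][0] - pos_sum   # = sum_k (mu+ - mu+_k)
            s_neg = n * column[o][1] - neg_sum   # = sum_k (mu- - mu-_k)
            scores[o][r] = s_pos + s_neg
    return scores

def final_scores(scores, params=None):
    """F_i = sum of S_ij over the chosen parameters (Definition 5)."""
    return {o: sum(s for r, s in row.items() if params is None or r in params)
            for o, row in scores.items()}

bfss = {
    "r1": {"o1": (0.7, -0.2), "o2": (0.4, -0.5), "o3": (0.6, -0.3)},
    "r2": {"o1": (0.3, -0.6), "o2": (0.8, -0.1), "o3": (0.5, -0.4)},
}
F = final_scores(score_table(bfss))
print(F, "-> decision:", max(F, key=F.get))
```

With ties, every index attaining the maximum would be reported, matching Step 5 of the algorithm.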
Example 2. Let $O = \{o_1, o_2, o_3, o_4, o_5\}$ be a collection of five objects under consideration and $R$ a collection of parameters related to the objects in O. Then, a BFSS $(G, R)$ is given by Table 8. By using (2) and (3), the scores of the positive and negative membership degrees for each $o_i \in O$ and $r_j \in R$ are given by Table 9 and Table 10, respectively. Now, by using Definition 4, the score of membership degrees $S_{ij}$ of the BFSS $(G, R)$ is given by Table 11. Using (5), the final score of each object is given by Table 12. Clearly, is the maximum score for the object , which coincides with the decision object obtained using the decision-making algorithm in [30]. From the above analysis, it can be easily perceived that our proposed decision-making approach based on BFSSs is efficient and reliable. However, from a realistic perspective, it involves redundant parameters for decision-making. To overcome this issue, the parameter reduction of BFSSs is proposed. A parameter reduction is a technique in which the set of parameters is reduced to a minimal subset that gives the same decision as the whole set.
3. Four Types of Parameter Reductions of BFSSs
1. OCB-PR:
We first define OCB-PR and then provide an algorithmic approach to obtain it, which is illustrated via an example.
Definition 6. Let $O$ be a universe of objects and $R$ a universal set of parameters associated with the objects in O. For a BFSS $(G, R)$, denote by $M_R$ the family of objects in O that attain the maximum value of the final score $F_i$. For each $B \subset R$, if $M_{R-B} = M_R$, then B is said to be dispensable in R; otherwise, B is called indispensable in the set R. The parameter set R is called independent if every $B \subset R$ is indispensable in R; otherwise, R is dependent. A subset P of R is said to be an OCB-PR of R if the following axioms hold.
- 1.
P is independent (that is, P is the smallest subset of R that keeps the optimal decision object invariant).
- 2.
$M_P = M_R$.
Based on Definition 6, we propose an OCB-PR algorithm that deletes redundant parameters while keeping the optimal decision object unchanged.
Example 3. Let $O$ be the set of four cars and $R$ a collection of parameters. Reconsider the BFSS $(G, R)$ of Example 8 in [30], where . We proceed to apply Algorithm 2 to the BFSS $(G, R)$. From Table 7, we compute that for , we obtain . Hence, is an OCB-PR of the BFSS $(G, R)$, given by Table 13. From Table 13, it can be easily observed that is the optimal decision object after reduction. Clearly, the subset is minimal, which keeps the optimal decision object unchanged.
Algorithm 2 OCB-PR.
Step 1. Input O, a universal set having n objects; R, a universe of parameters with m elements; and $(G, R)$, a BFSS, which is given by Definition 2.
Step 2. Calculate the scores of the positive and negative membership degrees $S^+_{ij}$ and $S^-_{ij}$ for $1 \le i \le n$ and $1 \le j \le m$.
Step 3. Calculate the scores of the membership degrees $S_{ij}$ by using Definition 4.
Step 4. Evaluate the final score $F_i$ for each object $o_i$ by Definition 5.
Step 5. Compute all $B \subset R$ that satisfy the following condition:
$$M_{R-B} = M_R. \qquad (6)$$
Output: The set $P = R - B$ is referred to as an OCB-PR of the BFSS $(G, R)$; if there does not exist a $B \subset R$ satisfying (6), then there is no OCB-PR of the BFSS $(G, R)$.
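A compact sketch of the OCB-PR search follows. It reuses the hypothetical `score_table` and `final_scores` helpers from the Algorithm 1 sketch and scans candidate subsets from smallest to largest, so the first hit is a minimal reduction; this brute-force search order is an illustrative assumption rather than the paper's prescribed procedure.

```python
from itertools import combinations

def optimal_set(F, tol=1e-9):
    """M: the family of objects attaining the maximum final score."""
    best = max(F.values())
    return {o for o, v in F.items() if abs(v - best) <= tol}

def ocb_pr(bfss):
    """Smallest P with M_P = M_R (condition (6)); None if no OCB-PR exists."""
    scores = score_table(bfss)
    params = sorted(bfss)
    m_r = optimal_set(final_scores(scores))
    for size in range(1, len(params)):
        for p in combinations(params, size):
            if optimal_set(final_scores(scores, set(p))) == m_r:
                return set(p)   # optimal decision object unchanged
    return None
```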
2. IRDCB-PR:
There are several real situations in which our main task is to compute the rank of optimal and suboptimal choices. The suboptimal choices are not considered by the OCB-PR method because OCB-PR only studies the optimal choice. To overcome this drawback, we define IRDCB-PR and present an algorithmic approach that keeps the rank of optimal and suboptimal choices unchanged after deleting the irrelevant parameters.
Definition 7. Let $O = \{o_1, o_2, \ldots, o_n\}$ be a universe of objects, $R$ a universe of parameters, and $P \subseteq R$. For a BFSS $(G, R)$, an indiscernibility relation is given by:
$$IND(P) = \{(o_i, o_k) \in O \times O : F_i(P) = F_k(P)\},$$
where $F_i(P) = \sum_{r_j \in P} S_{ij}$. For an arbitrary BFSS $(G, R)$ over O, the decision partition is given by:
$$C_R = \{Q_1, Q_2, \ldots, Q_z\},$$
where each subclass $Q_t$ collects the objects with equal final score and $F(Q_1) > F(Q_2) > \cdots > F(Q_z)$; that is, there are z subclasses. Actually, objects are ranked with respect to the score value of $F_i$, where $1 \le i \le n$.

Definition 8. Let $O = \{o_1, \ldots, o_n\}$ be a universe of objects and $R = \{r_1, \ldots, r_m\}$ a universal set of parameters associated with the objects in O, and let $(G, R)$ be a BFSS. For each $B \subset R$, if $C_{R-B} = C_R$, then B is said to be dispensable in R; otherwise, B is referred to as indispensable in the set R. The parameter set R is said to be independent if each $B \subset R$ is indispensable in R; otherwise, R is dependent. A subset P of R is said to be an IRDCB-PR of R if the following axioms hold.
- 1.
P is independent (that is, P is the minimal subset of R that keeps the rank of optimal and suboptimal decision choices unchanged).
- 2.
$C_P = C_R$.
Based on Definition 8, we propose an IRDCB-PR algorithm (see Algorithm 3) that deletes irrelevant parameters while keeping the rank of optimal and suboptimal decision choice objects unchanged.
Algorithm 3 IRDCB-PR.
Step 1. Input O, a universal set having n objects; R, a universe of parameters with m elements; and $(G, R)$, a BFSS, which is given by Definition 2.
Step 2. Calculate the scores of the positive and negative membership degrees $S^+_{ij}$ and $S^-_{ij}$ for $1 \le i \le n$ and $1 \le j \le m$.
Step 3. Calculate the scores of the membership degrees $S_{ij}$ by using Definition 4.
Step 4. Evaluate the final score $F_i$ for each object $o_i$ by using Definition 5.
Step 5. Compute all $B \subset R$ that satisfy the following condition:
$$C_{R-B} = C_R. \qquad (7)$$
Output: The set $P = R - B$ is referred to as an IRDCB-PR of the BFSS $(G, R)$; if there does not exist a $B \subset R$ satisfying (7), then there is no IRDCB-PR of the BFSS $(G, R)$.
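The sketch below mirrors Algorithm 3, again reusing the hypothetical `score_table` and `final_scores` helpers from the Algorithm 1 sketch. The decision partition of Definition 7 is modeled as an ordered list of score-tied groups, so preserving it preserves both the rank and the ties of all choices.

```python
from itertools import combinations

def decision_partition(F, tol=1e-9):
    """C: objects grouped by equal final score, groups ordered best first."""
    groups = []
    for o in sorted(F, key=F.get, reverse=True):
        if groups and abs(F[groups[-1][0]] - F[o]) <= tol:
            groups[-1].append(o)
        else:
            groups.append([o])
    return [sorted(g) for g in groups]

def irdcb_pr(bfss):
    """Smallest P with C_P = C_R (condition (7)); None if none exists."""
    scores = score_table(bfss)
    params = sorted(bfss)
    c_r = decision_partition(final_scores(scores))
    for size in range(1, len(params)):
        for p in combinations(params, size):
            if decision_partition(final_scores(scores, set(p))) == c_r:
                return set(p)   # rank of all decision choices unchanged
    return None
```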
Example 4. Let $O = \{o_1, o_2, o_3, o_4\}$ be a universal set of four objects and $R$ a set of parameters related to the objects in O. Then, a BFSS $(G, R)$ is given by Table 14. By using (2) and (3), the scores of positive and negative membership degrees for each $o_i \in O$ and $r_j \in R$ are given by Table 15 and Table 16, respectively. Now, by using Definition 4, the tabular arrangement for the score of membership degrees $S_{ij}$ of $(G, R)$ is given by Table 17. From (5), the final score of each object is given by Table 18. Clearly, is the maximum score for the object . Thus, is the optimal decision object, which coincides with the decision obtained using the algorithm in [30]. From Table 16, it can readily be computed: Using Algorithm 3, we can proceed further by examining the subsets of R. Thus, for , we have with . Note that after reduction, the rank and partition of the objects are not changed. Hence, (not all) is the IRDCB-PR of the BFSS $(G, R)$ given by Table 19. Clearly, is a minimal set that keeps the rank of decision choices unchanged.
3. N-PR:
Parameter reduction techniques such as OCB-PR and IRDCB-PR are not always workable in practical applications. Therefore, we provide the normal parameter reduction of BFSSs, which addresses the issues of added parameters and the suboptimal choice. We present a definition of N-PR and provide an algorithmic method to obtain it, both illustrated via an example.
Definition 9. Let $O = \{o_1, \ldots, o_n\}$ be a universe of objects and $R = \{r_1, \ldots, r_m\}$ a universal set of parameters associated with the objects in O. For a BFSS $(G, R)$, a subset $B \subset R$ is called dispensable if it satisfies the following expression:
$$\sum_{r_j \in B} S_{1j} = \sum_{r_j \in B} S_{2j} = \cdots = \sum_{r_j \in B} S_{nj}.$$
Otherwise, B is indispensable. A subset $P \subseteq R$ is called an N-PR of R if the following axioms hold.
- 1.
P is indispensable.
- 2.
$\sum_{r_j \in R-P} S_{1j} = \sum_{r_j \in R-P} S_{2j} = \cdots = \sum_{r_j \in R-P} S_{nj}$.
Based on Definition 9, we propose the N-PR algorithm (Algorithm 4) as follows.
Algorithm 4 N-PR.
Step 1. Input O, a universal set having n objects; R, a universe of parameters with m elements; and $(G, R)$, a BFSS, which is given by Definition 2.
Step 2. Calculate the scores of the positive and negative membership degrees $S^+_{ij}$ and $S^-_{ij}$ for $1 \le i \le n$ and $1 \le j \le m$.
Step 3. Calculate the scores of the membership degrees $S_{ij}$ by Definition 4.
Step 4. Evaluate the final score $F_i$ for each object $o_i$ by Definition 5.
Step 5. For each $B \subset R$ with cardinality $m - 1$, check whether it verifies the following condition:
$$\sum_{r_j \in B} S_{1j} = \sum_{r_j \in B} S_{2j} = \cdots = \sum_{r_j \in B} S_{nj}. \qquad (8)$$
Output: If any of these subsets verifies condition (8), then we select any of their complements in R as an optimal N-PR. Otherwise, for each $B \subset R$ with cardinality $m - 2$, we check whether it verifies condition (8) and select its complement $R - B$ as an optimal N-PR, and so on; if there does not exist a $B \subset R$ satisfying (8), then there is no N-PR of the BFSS $(G, R)$.
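A sketch of the N-PR search follows, reusing the hypothetical `score_table` helper from the Algorithm 1 sketch. Per condition (8), a deletable set B must give every object the same accumulated score, so removing B shifts all final scores by one common constant and leaves both ranks and score differences intact.

```python
from itertools import combinations

def n_pr(bfss, tol=1e-9):
    """Return the complement R - B of the largest B satisfying (8), or None."""
    scores = score_table(bfss)
    params = sorted(bfss)
    for size in range(len(params) - 1, 0, -1):   # largest B first
        for b in combinations(params, size):
            sums = [sum(scores[o][r] for r in b) for o in scores]
            if max(sums) - min(sums) <= tol:     # equal sums over B
                return set(params) - set(b)      # an optimal N-PR
    return None
```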
Example 5. Let $O = \{o_1, \ldots, o_6\}$ be a set of six objects and $R$ a set of parameters. Then, a BFSS $(G, R)$ is defined by Table 20. By using (2) and (3), the scores of the positive and negative membership degrees for each $o_i \in O$ and $r_j \in R$ are described by Table 21 and Table 22, respectively. Now, by using Definition 4, the tabular arrangement for the score of membership degrees $S_{ij}$ of $(G, R)$ is given by Table 23. From (5), the final score of each object is given by Table 24. Clearly, is the maximum score for the object . Thus, is the optimal decision object. From Table 24, it can be easily observed that , satisfying: Thus, (not all) is the N-PR of the BFSS $(G, R)$ given by Table 25. Clearly, the N-PR method maintains the invariable rank of the decision choices and takes into account the immutable differences between the decision choice objects. Thus, if we add new parameters to the set of parameters, there is no need to compute a new reduction again. The issue of added parameters is discussed through examples in Section 4.
4. AN-PR:
N-PR is an outstanding technique for the reduction of parameters, but it is very difficult to compute, taking into account that a BFSS provides bipolar information to describe membership degrees. To improve on this method, we propose a new reduction method, namely AN-PR, which is a compromise between IRDCB-PR and N-PR.
Definition 10. Let $O = \{o_1, \ldots, o_n\}$ be a universe of objects and $R = \{r_1, \ldots, r_m\}$ a universal set of parameters associated with the objects in O. For a BFSS $(G, R)$, given an arbitrary error value α, if there exists $B \subset R$ such that:
$$\Big|\max_{1 \le i \le n} \sum_{r_j \in B} S_{ij} - \min_{1 \le i \le n} \sum_{r_j \in B} S_{ij}\Big| \le \alpha$$
(that is, the score sums over B stay inside the range of α) and $C_{R-B} = C_R$, then B is dispensable; otherwise, B is indispensable. The subset $P = R - B$ is called an AN-PR of the BFSS $(G, R)$ when the following three axioms hold.
- 1.
P is indispensable.
- 2.
$\big|\max_{1 \le i \le n} \sum_{r_j \in R-P} S_{ij} - \min_{1 \le i \le n} \sum_{r_j \in R-P} S_{ij}\big| \le \alpha$, that is, inside the range of α.
- 3.
$C_P = C_R$.
We are ready to propose the AN-PR algorithm (Algorithm 5 below):
Algorithm 5 AN-PR.
Step 1. Input O, a universal set having n objects; R, a universe of parameters with m elements; α, an error value; and $(G, R)$, a BFSS, which is given by Definition 2.
Step 2. Calculate the scores of the positive and negative membership degrees $S^+_{ij}$ and $S^-_{ij}$ for $1 \le i \le n$ and $1 \le j \le m$.
Step 3. Calculate the scores of the membership degrees $S_{ij}$ by Definition 4.
Step 4. Evaluate the final score $F_i$ for each object $o_i$ by Definition 5.
Step 5. For each $B \subset R$ with cardinality $m - 1$, check whether it verifies the following expressions:
$$\Big|\max_{1 \le i \le n} \sum_{r_j \in B} S_{ij} - \min_{1 \le i \le n} \sum_{r_j \in B} S_{ij}\Big| \le \alpha \qquad (9)$$
and:
$$C_{R-B} = C_R. \qquad (10)$$
Output: If any of these subsets verifies conditions (9) and (10), then we select any of their complements in R as an optimal AN-PR. Otherwise, for each $B \subset R$ with cardinality $m - 2$, we check whether it verifies conditions (9) and (10) and select any of their complements in R as an optimal AN-PR, and so on; if there does not exist a $B \subset R$ satisfying conditions (9) and (10), then there is no AN-PR of the BFSS $(G, R)$.
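Finally, a sketch of the AN-PR search, reusing the hypothetical `score_table`, `final_scores`, and `decision_partition` helpers introduced above. It relaxes the exact-equality test of N-PR to the tolerance α of condition (9) while still demanding, via condition (10), the unchanged ranking that IRDCB-PR guarantees.

```python
from itertools import combinations

def an_pr(bfss, alpha):
    """Return R - B for the largest B meeting (9) and (10), or None."""
    scores = score_table(bfss)
    params = sorted(bfss)
    c_r = decision_partition(final_scores(scores))
    for size in range(len(params) - 1, 0, -1):   # largest B first
        for b in combinations(params, size):
            sums = [sum(scores[o][r] for r in b) for o in scores]
            p = set(params) - set(b)
            if (max(sums) - min(sums) <= alpha               # condition (9)
                    and decision_partition(final_scores(scores, p)) == c_r):
                return p                                     # condition (10)
    return None
```

Setting alpha to zero collapses the test to the N-PR condition, while dropping the alpha bound altogether leaves exactly the IRDCB-PR check, matching the compromise described below.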
As mentioned earlier, AN-PR is a compromise between IRDCB-PR and N-PR. Note that if there is no limitation from α (that is, without condition (9)), AN-PR reduces to IRDCB-PR, and when $\alpha = 0$, AN-PR reduces to N-PR. It can be easily observed that the reduction set obtained by AN-PR relies on the outcomes of IRDCB-PR and the provided range α, because the AN-PR algorithm builds on IRDCB-PR. In other words, reduction sets through AN-PR are computed from the reduction sets through IRDCB-PR. If the difference between the highest and lowest sums of scores of the reduced parameters is lower than α, the reduction set is referred to as a parameter reduction through AN-PR; otherwise, it is not. Note that the AN-PR method preserves the rank of decision choices.
Example 6. Let $O = \{o_1, \ldots, o_6\}$ be a universal set of six objects and $R$ a set of parameters. Then, a BFSS $(G, R)$ is defined by Table 26. By using (2) and (3), the scores of the positive and negative membership degrees for each $o_i \in O$ and $r_j \in R$ are given by Table 27 and Table 28, respectively. Now, by using Definition 4, the tabular arrangement for the score of membership degrees $S_{ij}$ of the BFSS $(G, R)$ is given by Table 29. From (5), the final score of each object is given by Table 30. Clearly, is the maximum score for the object . Thus, is an optimal decision object. Given an error value , using Table 30, we can easily compute that , satisfying: From Table 30, . Also, , satisfying . Hence, is an AN-PR of the BFSS $(G, R)$ given in Table 31.
4. Comparison
This section compares our proposed parameter reduction algorithms regarding the EDCR, applicability, exact degree of reduction, reduction result, multi-use of the reduction set, and applied situation.
1. Comparison of EDCR and applicability:
Assume that a coefficient q represents the ratio of correctly-computed parameter reductions in different datasets. In other words, q represents the applicability of our proposed reduction techniques in practical applications and is interpreted as the EDCR. OCB-PR only preserves the optimal decision object; therefore, a parameter reduction is easy to compute with OCB-PR. For example, is the OCB-PR in Example 3; and are the OCB-PRs in Example 4; and are the OCB-PRs in Example 5; is the OCB-PR in Example 6. Hence, .
IRDCB-PR is designed to delete the irrelevant parameters while preserving the partitioning and rank of objects. Obviously, computing a parameter reduction with IRDCB-PR is more difficult than with OCB-PR. For instance, is the IRDCB-PR in Example 4, and and are the IRDCB-PRs in Example 5. We can observe that there is no IRDCB-PR in Examples 3 and 6. Thus, .
N-PR maintains both the invariable rank of and the unchanged differences between decision choices. Obtaining a parameter reduction with the N-PR algorithm is the most difficult among the proposed reduction methods. We can see that and are the N-PRs in Example 5. Unfortunately, there is no N-PR in Examples 3, 4, and 6. Thus,
AN-PR is a compromise between IRDCB-PR and N-PR: without the restriction of α, AN-PR is IRDCB-PR, and when $\alpha = 0$, AN-PR is N-PR. Thus, the EDCR of AN-PR depends on α.
2. Comparison of the exact degree of reduction and reduction results:
The exact degree of parameter reduction considers the precision of parameter reduction and its impact on the post-reduction decision object. OCB-PR only keeps the optimal decision object unchanged after reduction (that is, the rank of decision choices may change after reduction); therefore, its exact degree of reduction is the lowest. IRDCB-PR removes redundant parameters while preserving the partitioning and rank of objects; therefore, its exact degree of reduction is higher than that of OCB-PR. N-PR preserves both the rank of and the unchanged differences between decision choices; therefore, its exact degree of reduction is the highest.
3. Comparison of the multiple use of the parameter reduction and applied situation:
The multiple use of parameter reduction means that the reduction sets can be reused when the expert demands suboptimal choices and when he/she adds some new parameters.
(i) Comparison of the multiple use of the parameter reduction and applied situation of OCB-PR:
OCB-PR usually has a wider range of applications. As we know, it only provides the optimal option. After selecting the best choice, if the data of the optimal object are deleted from the dataset, then, for the next decision, we need to make a new reduction again, which wastes much time on the parameter reduction. Furthermore, the added parameter set has not been considered. If new parameters are added to the parameter set, a new reduction is required. We explain these issues by the following example.
Example 7. From Example 2, clearly, is the best option in Table 12. An OCB-PR of is , which is given by Table 32. When the object is deleted from Table 12, the suboptimal choice object is . From Table 32, it can be easily observed that the suboptimal choice is . It is clear that the suboptimal choice has changed. Let be the set of added parameters for the BFSS in Example 2, given by: For the parameters and , the score of membership degrees of the BFSS is given by Table 33. By combining Table 12 and Table 33, we can observe that is the optimal decision object from Table 34, while by combining Table 32 and Table 33, is the best option from Table 35. Clearly, these two optimal options are different. Thus, OCB-PR has a lower degree of the multiple use of parameter reduction.
(ii) Comparison of the multiple use of parameter reduction and applied situation of IRDCB-PR:
IRDCB-PR maintains the rank of suboptimal decision choices. However, the issue of added parameters is not solved by the IRDCB-PR method. We give the following example to explain this idea.
Example 8. Let be the set of added parameters for the BFSS in Example 4, given by: For the parameters and , the score of membership degrees is given by Table 36. Combine Table 18 and Table 19 with Table 36. From Table 37, we see that . Similarly, using Table 38, we get . Clearly, the ranks of the choice objects in Table 37 and Table 38 are different. From Table 18, we observe that . The IRDCB-PR of the BFSS is . We can compute that . Thus, IRDCB-PR preserves the partition and rank of the objects after parameter reduction. From the above analysis, we observe that the issue of the suboptimal choice can be solved by the IRDCB-PR method, while the issue of added parameters cannot.
(iii) Comparison of the multiple use of parameter reduction and applied situation of N-PR:
The problems of the suboptimal choice and rank of decision choice objects can be solved using N-PR. The following example addresses this issue.
Example 9. Let be the set of added parameters for the BFSS in Example 5, given by: For the parameters and , the score of membership degrees is given by Table 39. Combine Table 24 (the final score table of the BFSS) and Table 25 (the N-PR of the BFSS) with Table 39 (the added parameters' score table). From Table 40 and Table 41, we can easily compute that and , respectively. Hence, the ranks of the decision choices are the same. Thus, N-PR has the highest degree of the multiple use of reduction sets.
(iv) Comparison of the multiple use of parameter reduction and applied situation of AN-PR:
No doubt, N-PR is a suitable approach for parameter reduction, but it is very hard to compute because a BFSS provides bipolar information to describe membership degrees. To reduce this computational difficulty, AN-PR is given as a compromise between IRDCB-PR and N-PR.
Example 10. By combining Table 30 (the final score table for the BFSS in Example 6) and Table 31 (the AN-PR of the BFSS) with Table 39 (the added parameters' score table), we get Table 42 and Table 43, respectively. From Table 42 and Table 43, we find that and , respectively. Hence, the ranks of the decision choices are the same, but there is a little difference among the decision choices within the range of α. Thus, AN-PR also has a high degree of the multiple use of reduction sets.
5. Application
To demonstrate our proposed techniques, we applied them to a practical application.
Let $O = \{o_1, o_2, \ldots, o_{12}\}$ be a set of twelve investment avenues, where:
'$o_1$' represents "Bank Deposits",
'$o_2$' represents "Insurance",
'$o_3$' represents "Foreign or Overseas Mutual Fund",
'$o_4$' represents "Bonds Offered by the Government and Corporates",
'$o_5$' represents "Equity Mutual Funds",
'$o_6$' represents "Precious Objects",
'$o_7$' represents "Postal Savings",
'$o_8$' represents "Shares and Stocks",
'$o_9$' represents "Employee Provident Fund",
'$o_{10}$' represents "Company Deposits",
'$o_{11}$' represents "Real Estate",
'$o_{12}$' represents "Money Market Instruments",
and let $R = \{r_1, r_2, \ldots, r_{10}\}$ be a collection of parameters associated with the objects in O (the $r_j$'s are basically factors influencing the investment decision), where:
'$r_1$' denotes "Safety of Funds",
'$r_2$' denotes "Liquidity of Funds",
'$r_3$' denotes "State Policy",
'$r_4$' denotes "Maximum Profit in Minimum Period",
'$r_5$' denotes "Stable Return",
'$r_6$' denotes "Easy Accessibility",
'$r_7$' denotes "Tax Concession",
'$r_8$' denotes "Minimum Risk of Possession",
'$r_9$' denotes "Political Climate",
'$r_{10}$' denotes "Level of Income".
An investor Z wants to invest in the most suitable investment avenue among the above-mentioned investment avenues. The information relating the investment avenues to the influencing factors is given in the form of a BFSS $(G, R)$, shown in Table 44. By using (2) and (3), the scores of the positive ($S^+_{ij}$) and negative ($S^-_{ij}$) membership degrees for $1 \le i \le 12$ and $1 \le j \le 10$ are described by Table 45 and Table 46, respectively. Now, by using Definition 4, the tabular arrangement for the score of membership degrees $S_{ij}$, where $1 \le i \le 12$ and $1 \le j \le 10$, of the BFSS $(G, R)$ is given by Table 47. From (5), the final score $F_i$ of each object is given by Table 48.
Clearly, is the maximum score for the object . Thus, the investment avenue $o_{11}$, namely real estate, is the best choice for the investor Z. Our proposed reduction algorithms were executed on the investment avenue dataset. Consequently, parameter reduction sets were readily computed by OCB-PR, and the minimal reduction was (not all), which kept the optimal decision invariant. Regrettably, we obtained no parameter reduction through IRDCB-PR, AN-PR, or N-PR. This means that OCB-PR can be applied in more real-life decision-making situations than IRDCB-PR, AN-PR, and N-PR.
6. Conclusions
Parameter reduction is one of the main issues in soft set modeling and its hybrid models, including fuzzy soft set theory. Parameter reduction preserves the decision while removing the irrelevant parameters. In this paper, a novel approach for decision-making based on BFSSs was introduced, and some decision-making problems were solved by this newly-proposed approach to prove its validity, including a decision-making problem presented in [30]. It was also observed that both methods yield the same results. Using this concept, four novel parameter reductions of BFSSs, namely OCB-PR, IRDCB-PR, N-PR, and AN-PR, were defined and illustrated through examples. Due to the existence of bipolar information in many real-world problems, the newly-proposed decision-making method based on BFSSs and the parameter reductions of BFSSs are very efficient approaches to solving such problems when compared to some existing methods, including fuzzy soft sets [32] and their parameter reduction [33]. An algorithm for each parameter reduction approach was developed. Moreover, our proposed reduction methods were compared from theoretical and experimental points of view, as displayed in Table 49. Finally, an application was studied to show the feasibility of our proposed reduction algorithms. In the future, we expect to extend our research to (1) parameter reduction of Pythagorean fuzzy soft sets, (2) parameter reduction of Pythagorean fuzzy bipolar soft sets, and (3) parameter reduction of m-polar fuzzy soft sets.