Article

Pythagorean Fuzzy Overlap Functions and Corresponding Fuzzy Rough Sets for Multi-Attribute Decision Making

School of Mathematics and Data Science, Shaanxi University of Science & Technology, Xi’an 710021, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2025, 9(3), 168; https://doi.org/10.3390/fractalfract9030168
Submission received: 28 November 2024 / Revised: 7 March 2025 / Accepted: 8 March 2025 / Published: 11 March 2025

Abstract

As a non-associative connective in fuzzy logic, the analysis and research of overlap functions have been extended to many generalized cases, such as interval-valued and intuitionistic fuzzy overlap functions (IFOFs). However, overlap functions face challenges in the Pythagorean fuzzy (PF) environment. This paper first extends overlap functions to the PF domain by proposing PF overlap functions (PFOFs), discussing their representable forms, and providing a general construction method. It then introduces a new PF similarity measure which addresses issues in existing measures (e.g., the inability to measure the similarity of certain PF numbers) and demonstrates its effectiveness through comparisons with other methods, using several examples in fractional form. Based on the proposed PFOFs and their induced residual implication, new generalized PF rough sets (PFRSs) are constructed, which extend the PFRS models. The relevant properties of their approximation operators are explored, and they are generalized to the dual-domain case. Due to the introduction of hesitation in IF and PF sets, the approximate accuracy of classical rough sets is no longer applicable. Therefore, a new PFRS approximate accuracy is developed which generalizes the approximate accuracy of classical rough sets and remains applicable to the classical case. Finally, three multi-criteria decision-making (MCDM) algorithms based on PF information are proposed, and their effectiveness and rationality are validated through examples, making them more flexible for solving MCDM problems in the PF environment.

1. Introduction

Fuzzy set theory and rough set theory are two effective tools for dealing with fuzzy and uncertain information. In 1965, L.A. Zadeh proposed fuzzy set theory [1], marking the birth of fuzzy mathematics. The core of fuzzy mathematics is the fuzzy set, which aims to imitate the fuzzy thinking of the human brain and provides effective ideas and methods for solving various practical problems, especially the processing of complex systems with human intervention. It has found extensive applications in fields such as healthcare diagnosis [2], system evaluation [3], decision making [4], and other domains. In 1982, Pawlak first gave the concept of rough set theory [5], whose objective is to classify known knowledge through equivalence relations, construct upper and lower approximation operators, and use known knowledge to describe unknown knowledge. It has been effectively utilized in fields such as machine learning [6], pattern analysis [7], knowledge discovery [8], and various other domains. In 1990, building on fuzzy sets and rough sets, Dubois et al. proposed fuzzy rough sets (FRSs) [9], introducing fuzzy relations into rough sets, enriching the rough set models, and expanding the scope of application of rough sets. Currently, there are many research results on fuzzy rough sets [10,11,12,13,14].
In many practical fuzzy problems, the membership and non-membership of classical fuzzy sets are often insufficient to explain the relationship between objects and sets. For example, in simulated voting problems, in addition to approval and disapproval, there is also the option of abstention to remain neutral. Obviously, it is inaccurate to use classical fuzzy sets to deal with such problems. Therefore, in 1986, Atanassov introduced the hesitation degree to solve such problems and proposed IF sets [15]. IF sets are characterized by three parameters: membership, non-membership, and hesitation degree. Various generalized IFRS models have been studied thus far [16,17,18,19,20,21]. With the continuous in-depth study of many problems, researchers have found that the condition that the sum of membership and non-membership degrees be less than or equal to one has become increasingly restrictive in solving certain problems. In 2013, Yager et al. proposed PF sets [22], which relaxed the constraint from the sum of membership and non-membership being less than or equal to one to the sum of their squares being less than or equal to one, greatly expanding the applicability of fuzzy set theory. Subsequently, PF numbers [23] and PF relations [24] were proposed. Naturally, PF sets have been introduced into rough set theory. Zhang et al. [25] first gave the definition of PFRSs. Since then, various generalized PFRS models have been proposed [26,27,28,29]. However, generalized PFRS models based on overlap functions remain unexplored.
The process of combining a large amount of data from many different sources into a representative value is called information aggregation, and the mathematical models used to perform information aggregation are called aggregation functions. Bustince et al. proposed overlap functions in 2010 [30]. These binary continuous aggregation functions emerged from problems in image processing and classification tasks. Compared with the classical aggregation operators, namely t-norms and t-conorms, overlap functions are free from the constraints of "one as the unit element" and "associativity". They have made rapid progress in both theory and applications since their proposal. There are many generalized overlap functions, such as interval-valued overlap functions [31] and IFOFs. However, existing research results have limited applicability in the PF domain.
This paper is mainly driven by two research motivations. First, since overlap functions face challenges in the PF environment, they are extended to the PF domain, and a special lattice-valued overlap function, the PFOF, is proposed. Second, generalized PFRS models based on overlap functions have not been studied; thus, PFOFs are combined with rough set theory, and PF rough approximation operators are constructed using PFOFs and their induced fuzzy implications, leading to the proposal of a generalized PFRS model. The relationship between the research content of this paper and existing models is shown in Figure 1 (the blue part in Figure 1 represents the research content of this paper). Compared with classical PFRSs, PFRSs based on PFOFs offer greater flexibility by allowing different PFOFs to be used. (This flexibility is specifically demonstrated in Section 6.2.4 through a comparison between generalized PFRSs and the classical case.)
The rest of this paper is organized as follows. Section 2 introduces some fundamental definitions which have been studied. Section 3 proposes the concept of PFOFs, discusses their representable forms, and presents a general construction method for PFOFs. Section 4 introduces a similarity measure for PF sets based on fuzzy scoring functions and verifies its effectiveness and rationality through examples. Section 5 proposes a novel PFRS based on PFOFs and the associated fuzzy residual implications, discusses its relevant properties, generalizes it to the dual-domain case, and constructs an approximation accuracy measure for PFRSs. Section 6 presents three MCDM methods based on PF information and verifies the effectiveness and rationality of the algorithm through examples. Finally, Section 7 summarizes the entire paper.

2. Fundamental Definitions

In this section, we go over several fundamental definitions related to fuzzy sets, overlap functions, and rough sets.

2.1. Fuzzy Sets

To address the ambiguity in concept extension and the uncertainty regarding whether a concept belongs to one category or another, American cybernetics expert L. A. Zadeh proposed the theory of fuzzy sets. Unlike classical set theory, which classifies elements as either belonging or not belonging to a set, fuzzy set theory innovatively introduces the concept of partial membership between elements and sets.
Definition 1
([1]). Let U be a domain and μ_F : U → [0, 1] denote a mapping of the elements of the domain U to [0, 1]. Then, we define a fuzzy set M on the domain U as follows:
M = { (m, μ_F(m)) | m ∈ U },
where μ_F(m) denotes the degree to which m belongs to M and satisfies 0 ≤ μ_F(m) ≤ 1.
Atanassov introduced the hesitation degree on the basis of the traditional fuzzy sets and described uncertain information in terms of three aspects: membership, non-membership, and hesitation.
Definition 2
([15]). Let U be a domain and μ_I, v_I : U → [0, 1] denote mappings of the elements of the domain U to [0, 1]. Then, we define an IF set Z on the domain U as follows:
Z = { (z, μ_I(z), v_I(z), π_I(z)) | z ∈ U },
where μ_I(z) denotes the degree to which z belongs to Z, v_I(z) denotes the degree to which z does not belong to Z and satisfies 0 ≤ μ_I(z) + v_I(z) ≤ 1, and π_I(z) = 1 − μ_I(z) − v_I(z) denotes the hesitation of z concerning Z.
Yager proposed PF sets based on IF sets, relaxing the restrictions on membership and non-membership and allowing evaluators to assign values within a wider range, thereby more accurately describing the uncertainty of things and better reflecting reality.
Definition 3
([22]). Let U be a domain and μ_P, v_P : U → [0, 1] denote mappings of the elements of the domain U to [0, 1]. Then, we define a PF set B on the domain U as follows:
B = { (b, μ_P(b), v_P(b), π_P(b)) | b ∈ U },
where μ_P(b) denotes the degree to which b belongs to B, v_P(b) denotes the degree to which b does not belong to B and satisfies 0 ≤ μ_P²(b) + v_P²(b) ≤ 1, and π_P(b) = 1 − μ_P²(b) − ν_P²(b) denotes the hesitation of b concerning B.
Remark 1.
(1) The difference between classical sets and fuzzy sets
The boundaries of classical sets are clear: an element either belongs to a set or it does not. For example, for a domain U = {a, b, c}, where a and b are even numbers and c is an odd number, the even-number set on the domain U is {a, b}, and the odd-number set is {c}. There is no element which is both even and odd.
The boundary of a fuzzy set is fuzzy and uncertain. An element can partially belong to a set, with the degree of membership indicating how much the element belongs to the set. This degree lies in the interval [0, 1]. For example, the concept of height is fuzzy. If 60% of people think that a height of 180 cm is tall, then we can use 0.6 to represent the degree to which 180 cm belongs to the set of tall people. If, for a fuzzy set on the domain U, the degree of membership of each element is either zero or one, then it degenerates into a classical set. Therefore, classical sets are a special case of fuzzy sets.
(2) The difference between classical fuzzy sets, IF sets, and PF sets
For some practical problems, it is difficult to handle them solely with classical fuzzy sets. For example, in a voting problem, suppose 10 people vote on an activity, with 4 in favor, 3 against, and 3 unable to participate for some reason; using a classical fuzzy set to represent support for the activity may not work well. If the membership function μ represents support for the activity, then μ = 0.4, and since the sum of membership and non-membership in a classical fuzzy set is one, the non-membership degree v must represent both the non-voters and the opposition (i.e., v = 0.6), which leads to information loss. Therefore, the IF set introduces the hesitation degree π to solve this problem. IF sets satisfy μ + v + π = 1, where μ is the degree of membership, v is the degree of non-membership, and π is the hesitation, representing the uncertainty in the membership relationship. PF sets, based on IF sets, extend the condition μ + v + π = 1 to μ² + v² + π = 1, expanding the range which IF sets can represent and enhancing their applicability.
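To make the difference between the three constraints concrete, the following minimal Python sketch (not part of the original paper) checks the validity of a (membership, non-membership) pair under the IF and PF conditions and computes the corresponding hesitation degrees.

```python
# Minimal illustration (not from the paper): IF vs. PF constraints and hesitation.

def hesitation_if(mu, v):
    """IF hesitation: pi = 1 - mu - v, valid only when mu + v <= 1."""
    assert 0 <= mu <= 1 and 0 <= v <= 1 and mu + v <= 1
    return 1 - mu - v

def hesitation_pf(mu, v):
    """PF hesitation: pi = 1 - mu**2 - v**2, valid only when mu**2 + v**2 <= 1."""
    assert 0 <= mu <= 1 and 0 <= v <= 1 and mu**2 + v**2 <= 1
    return 1 - mu**2 - v**2

# (0.7, 0.6) is not a valid IF pair (0.7 + 0.6 > 1) but is a valid PF pair
# (0.49 + 0.36 <= 1), which is exactly the extra freedom PF sets provide.
print(hesitation_pf(0.7, 0.6))   # ≈ 0.15
```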

2.2. Overlap Functions

Definition 4
([30]). A binary function O : [0, 1]² → [0, 1] is called an overlap function if, for all a, b, c ∈ [0, 1], it satisfies the following conditions:
a. 
Commutative: O ( a , b ) = O ( b , a ) ;
b. 
Boundary condition: O(a, b) = 0 ⟺ ab = 0;
c. 
Boundary condition: O(a, b) = 1 ⟺ ab = 1;
d. 
Monotonicity: If b ≤ c, then O(a, b) ≤ O(a, c);
e. 
Continuity.
If O satisfies O(a, 1) ≤ a for all a ∈ [0, 1], then O is called one-section deflation; if O satisfies O(a, 1) ≥ a for all a ∈ [0, 1], then O is called one-section inflation.
Definition 5
([32]). We define the function R_O : [0, 1]² → [0, 1] as follows:
R_O(a, b) = max{ c ∈ [0, 1] | O(a, c) ≤ b },
where O is an overlap function. Then, we call the function R O a residual implication induced by O.
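As a hedged illustration of Definition 5 (not from the paper), the product O(a, b) = ab is an overlap function, and its residual implication has the well-known closed form R_O(a, b) = min(1, b/a) for a > 0 (and 1 for a = 0). The sketch below approximates R_O directly from the max-based definition.

```python
# A hedged numerical sketch (not from the paper): the product overlap function and
# its residual implication computed from Definition 5 by a grid search.

def O(a, b):
    return a * b

def R_O(a, b, steps=10_000):
    """Approximate R_O(a, b) = max{c in [0,1] | O(a, c) <= b} on a grid."""
    return max(c / steps for c in range(steps + 1) if O(a, c / steps) <= b)

print(R_O(0.8, 0.4))   # ~0.5 = min(1, 0.4/0.8)
print(R_O(0.3, 0.6))   # 1.0, since 0.3*c <= 0.6 for every c in [0, 1]
```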

2.3. Rough Sets

Definition 6
([5]). Let U be a domain. Any subset of U × U = { (c_1, c_2) | c_1, c_2 ∈ U } is called a binary relation on the domain U.
Let R be a binary relation on U. R is called reflexive if R satisfies ∀ c ∈ U, (c, c) ∈ R. R is called symmetric if R satisfies ∀ c_1, c_2 ∈ U, if (c_1, c_2) ∈ R, then (c_2, c_1) ∈ R. R is called transitive if R satisfies ∀ c_1, c_2, c_3 ∈ U, if (c_1, c_2) ∈ R and (c_2, c_3) ∈ R, then (c_1, c_3) ∈ R.
R on the domain U which satisfies the reflexive, symmetric, and transitive relations is called an equivalence relation on U. ( U , R ) is called a Pawlak approximate space.
Definition 7
([5]). Let ( U , R ) be a Pawlak approximation space, where U is a domain and R is an equivalence relation on U. C is a subset of the domain U. Then, the rough approximation operators of C are defined as follows:
R̲C = ⋃_{c ∈ U} { [c]_R | [c]_R ⊆ C },   R̄C = ⋃_{c ∈ U} { [c]_R | [c]_R ∩ C ≠ ∅ },
where [ c ] R denotes the equivalence class of c under the equivalence relation R.
Remark 2.
We use Figure 2 to illustrate the difference between rough sets and classical sets.
We regard each square in Figure 2a as an element, and all the squares together form a domain. The blue box represents a set. As shown in the figure, the blue box contains the four elements in the center of the domain. Therefore, the set is described by the elements in the domain, making it a classical set.
From Figure 2b, we can see that the blue box not only completely contains the four elements in the center of the domain but also partially contains other elements. This scenario extends beyond the scope of classical sets. We use rough sets to represent this situation. The gray area in the figure represents the four elements which are completely contained, corresponding to the lower approximation of the set. The entire domain represents the upper approximation of the set, which includes both the gray and blue areas. Through the upper and lower approximations, we can represent undefined or uncertain sets.
Definition 8
([5]). Let (U, R) be an approximation space and C ⊆ U:
(1) 
The approximation accuracy of set C is defined as follows:
α_R(C) = |R̲C| / |R̄C|,
where |C| denotes the cardinality of the set C. It is stipulated that α_R(∅) = 1.
(2) 
The roughness of the set C is defined as follows:
ρ_R(C) = 1 − α_R(C).
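A small, self-contained Python sketch (not from the paper) of Definitions 6–8 is given below: the Pawlak lower and upper approximations are computed from an equivalence relation given as a partition, and the resulting approximation accuracy follows.

```python
# A hedged sketch (not from the paper): Pawlak approximations and accuracy.

def lower_upper(partition, C):
    lower = set().union(*[blk for blk in partition if blk <= C])   # classes inside C
    upper = set().union(*[blk for blk in partition if blk & C])    # classes meeting C
    return lower, upper

partition = [{1, 2}, {3, 4}, {5}]   # equivalence classes of R
C = {1, 2, 3}                       # the set to approximate
low, up = lower_upper(partition, C)
print(low, up)                      # {1, 2} {1, 2, 3, 4}
print(len(low) / len(up))           # accuracy alpha_R(C) = 0.5
```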
Definition 9
([9]). Let R_F be a fuzzy relation on U. R_F is called reflexive if R_F satisfies ∀ m ∈ U, R_F(m, m) = 1. R_F is called symmetric if R_F satisfies ∀ m_1, m_2 ∈ U, R_F(m_1, m_2) = R_F(m_2, m_1). R_F is called transitive if R_F satisfies ∀ m_1, m_2, m_3 ∈ U, R_F(m_1, m_2) ∧ R_F(m_2, m_3) ≤ R_F(m_1, m_3).
R F on the domain U which satisfies the reflexive, symmetric, and transitive relations is called a fuzzy equivalence relation on U.
Definition 10
([9]). Let ( U , R F ) be a fuzzy approximation space, where U is a domain and R F is a fuzzy binary relation on U. M is a fuzzy set on U. Then, the fuzzy rough approximation operators of M are defined as follows:
R̲_F M(m_1) = ⋀_{m_2 ∈ U} ( M(m_2) ∨ (1 − R_F(m_1, m_2)) ),   R̄_F M(m_1) = ⋁_{m_2 ∈ U} ( M(m_2) ∧ R_F(m_1, m_2) ).
If the fuzzy set M satisfies R̲_F M ≠ R̄_F M, then M is called undefinable, and the pair (R̲_F M, R̄_F M) is called an FRS.
Remark 3.
If certain factors in classical rough sets are fuzzified, such as replacing the equivalence relation with a fuzzy one or replacing classical sets with fuzzy sets (both can be performed simultaneously), then classical rough sets become fuzzy rough sets. Therefore, fuzzy rough sets generalize classical rough sets.
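The following minimal Python sketch (not from the paper) illustrates Definition 10 on a finite domain: the fuzzy relation R_F and the fuzzy set M are stored as plain dictionaries of membership degrees, and the lower/upper operators use the min–max formulas above.

```python
# A hedged sketch (not from the paper) of the fuzzy rough approximation operators.

U = ["m1", "m2", "m3"]
R_F = {("m1","m1"): 1.0, ("m1","m2"): 0.6, ("m1","m3"): 0.2,
       ("m2","m1"): 0.6, ("m2","m2"): 1.0, ("m2","m3"): 0.4,
       ("m3","m1"): 0.2, ("m3","m2"): 0.4, ("m3","m3"): 1.0}
M = {"m1": 0.9, "m2": 0.5, "m3": 0.3}

def frs_lower(x):   # lower: min over y of max(M(y), 1 - R_F(x, y))
    return min(max(M[y], 1 - R_F[(x, y)]) for y in U)

def frs_upper(x):   # upper: max over y of min(M(y), R_F(x, y))
    return max(min(M[y], R_F[(x, y)]) for y in U)

print({x: (frs_lower(x), frs_upper(x)) for x in U})
```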

3. PF Overlap Functions

In this section, we first give the definition of PFOFs, introduce the concept of representable PFOFs, and finally present a general method for generating PFOFs.
Definition 11.
Let L_P be the set of all pairs a = (a_1, a_2), where a_1, a_2 ∈ [0, 1] and a_1² + a_2² ≤ 1. We define a partial order ≤_P on L_P as follows: for any a = (a_1, a_2), b = (b_1, b_2) ∈ L_P, a ≤_P b if and only if a_1 ≤ b_1 and a_2 ≥ b_2.
Theorem 1.
The partially ordered set (L_P, ≤_P) is a complete lattice; that is, for any subset S ⊆ L_P, the supremum sup(S) and infimum inf(S) exist in L_P.
Proof. 
Let S ⊆ L_P be an arbitrary subset. We define
A = { a ∈ [0, 1] | ∃ b ∈ [0, 1] s.t. (a, b) ∈ S },   B = { b ∈ [0, 1] | ∃ a ∈ [0, 1] s.t. (a, b) ∈ S }.
Since A, B ⊆ [0, 1], let a* = sup(A) and b* = inf(B). We claim that (a*, b*) ∈ L_P and that it serves as the least upper bound of S in the order ≤_P.
Claim 1.
(a*)² + (b*)² ≤ 1.
Take any x ∈ A. By the definition of A, there exists y ∈ B such that (x, y) ∈ S. Since S ⊆ L_P, we know x² + y² ≤ 1, and hence y ≤ √(1 − x²). But b* is the infimum of all such y ∈ B, and thus for every x ∈ A, we have b* ≤ y ≤ √(1 − x²). In particular, since x can be chosen arbitrarily close to a*, letting x → a* yields b* ≤ √(1 − (a*)²). Squaring both sides gives (a*)² + (b*)² ≤ 1, and thus (a*, b*) ∈ L_P.
Claim 2.
( a * , b * )  is an upper bound of S.
Take any (x, y) ∈ S. By the construction of A and B, we have x ∈ A and y ∈ B, and thus x ≤ a* and y ≥ b*. Recall that ≤_P is defined by (x_1, x_2) ≤_P (y_1, y_2) if and only if x_1 ≤ y_1 and x_2 ≥ y_2, and hence (x, y) ≤_P (a*, b*). Thus, (a*, b*) is an upper bound of S under ≤_P.
Claim 3.
(a*, b*) is the least upper bound of S under ≤_P.
Suppose (u, v) is any other upper bound of S. Then, for every (x, y) ∈ S, we have (x, y) ≤_P (u, v), which means x ≤ u and y ≥ v. In particular, a* = sup(A) ≤ u and b* = inf(B) ≥ v. Thus, we have (a*, b*) ≤_P (u, v), showing that (a*, b*) is the least upper bound of S in ≤_P.
By Claims 1–3, the element (a*, b*) is precisely sup(S) in (L_P, ≤_P).
A completely analogous argument (reversing the roles of ≤ and ≥, i.e., a "dual" proof) shows that S also has an infimum in L_P. Consequently, (L_P, ≤_P) is a complete lattice. □
Definition 12.
A binary function P_O : L_P² → L_P is called a PFOF if, for any a, b, c ∈ L_P, it satisfies the following:
a. 
Commutativity: P O ( a , b ) = P O ( b , a ) ;
b. 
Boundary condition: P_O(a, b) = (0, 1) ⟺ a = (0, 1) or b = (0, 1);
c. 
Boundary condition: P_O(a, b) = (1, 0) ⟺ a = b = (1, 0);
d. 
Monotonicity: P_O(a, b) ≤_P P_O(a, c) if b ≤_P c;
e. 
Continuity: ∀ i ∈ I, b_i ∈ L_P, P_O(a, ⋁_{P, i ∈ I} b_i) = ⋁_{P, i ∈ I} P_O(a, b_i) and P_O(a, ⋀_{P, i ∈ I} b_i) = ⋀_{P, i ∈ I} P_O(a, b_i), where ⋁_P and ⋀_P are the supremum and infimum operators of (L_P, ≤_P), respectively.
If P_O satisfies P_O(a, (1, 0)) ≤_P a for all a ∈ L_P, then P_O is called one-section deflation, and if P_O satisfies P_O(a, (1, 0)) ≥_P a for all a ∈ L_P, then P_O is called one-section inflation.
Proposition 1.
For any a = (a_1, a_2), b = (b_1, b_2) ∈ L_P, we define the function P_O as follows:
P_O(a, b) = ( √(O(a_1², b_1²)), √(1 − O(1 − a_2², 1 − b_2²)) ),
where O is an overlap function. Then, we call P_O a PFOF; P_O satisfies one-section deflation if O satisfies one-section deflation, and P_O satisfies one-section inflation if O satisfies one-section inflation.
Proof. 
(a)
Because of the commutativity of O, for all a = ( a 1 , a 2 ) , b = ( b 1 , b 2 ) L P , we have
P O ( a , b ) = ( O ( a 1 2 , b 1 2 ) , 1 O ( 1 a 2 2 , 1 b 2 2 ) ) = ( O ( b 1 2 , a 1 2 ) , 1 O ( 1 b 2 2 , 1 a 2 2 ) ) = P O ( b , a ) ;
(b)
According to Definition 11 and the boundary condition of O, we can obtain P O ( a , b ) = ( 0 , 1 ) O ( a 1 2 , b 1 2 ) = 0 , 1 O ( 1 a 2 2 , 1 b 2 2 ) = 1 O ( a 1 2 , b 1 2 ) = 0 , O ( 1 a 2 2 , 1 b 2 2 ) = 0 a 1 2 b 1 2 = 0 , a 2 2 b 2 2 = 1 a = ( 0 , 1 ) or b = ( 0 , 1 ) ;
(c)
According to the boundary condition of O, we have P O ( a , b ) = ( 1 , 0 ) O ( a 1 2 , b 1 2 ) = 1 , 1 O ( 1 a 2 2 , 1 b 2 2 ) = 0 O ( a 1 2 , b 1 2 ) = 1 , O ( 1 a 2 2 , 1 b 2 2 ) = 1 a 1 2 b 1 2 = 1 , a 2 2 b 2 2 = 0 a = b = ( 1 , 0 ) ;
(d)
If b P c , then according to Definition 11, we can obtain b 1 c 1 , b 2 c 2 , and thus we have b 1 2 c 1 2 , 1 b 2 2 1 c 2 2 . By letting P O ( a , b ) = ( u 1 , u 2 ) , P O ( a , c ) = ( v 1 , v 2 ) , we have u 1 = O ( a 1 2 , b 1 2 ) , u 2 = 1 O ( 1 a 2 2 , 1 b 2 2 ) and v 1 = O ( a 1 2 , c 1 2 ) , v 2 = 1 O ( 1 a 2 2 , 1 c 2 2 ) . From the monotonicity of the overlap function, we know that O ( a 1 2 , b 1 2 ) O ( a 1 2 , c 1 2 ) , O ( 1 a 2 2 , 1 b 2 2 ) O ( 1 a 2 2 , 1 c 2 2 ) , and thus there is u 1 v 1 , u 2 v 2 ; that is, P O ( a , b ) P P O ( a , c ) ;
(e)
Firstly, we prove i I , b i L P , P O ( a , P i I b i ) = P i I P O ( a , b i ) . Because of the continuity of the overlap function, we have
O ( a 1 2 , i I b i 1 2 ) = i I O ( a 1 2 , b i 1 2 ) , O ( 1 a 2 2 , 1 i I b i 2 2 ) = O ( 1 a 2 2 , i I ( 1 b i 2 2 ) ) = i I O ( 1 a 2 2 , 1 b i 2 2 ) ,
Then, we have
P O ( a , P i I b i ) = ( O ( a 1 2 , i I b i 1 2 ) , 1 O ( 1 a 2 2 , 1 i I b i 2 2 ) ) = ( O ( a 1 2 , i I b i 1 2 ) , 1 O ( 1 a 2 2 , i I ( 1 b i 2 2 ) ) ) = ( i I O ( a 1 2 , b i 1 2 ) , 1 i I O ( 1 a 2 2 , 1 b i 2 2 ) ) = ( i I O ( a 1 2 , b i 1 2 ) , i I 1 O ( 1 a 2 2 , 1 b i 2 2 ) ) = P i I ( O ( a 1 2 , b i 1 2 ) , 1 O ( 1 a 2 2 , 1 b i 2 2 ) ) = P i I P O ( a , b i ) .
Similarly, we can obtain i I , b i L P , P O ( a , P i I b i ) = P i I P O ( a , b i ) . Therefore, P O ( a , b ) is continuous.
(f)
For P O ( ( 1 , 0 ) , a ) = ( O ( 1 , a 1 2 ) , 1 O ( 1 , 1 a 2 2 ) ) , if O satisfies one-section deflation, then O ( 1 , a 1 2 ) a 1 , 1 O ( 1 , 1 a 2 2 ) a 2 , and thus P O ( ( 1 , 0 ) , a ) P a , P O satisfies one-section deflation. Similarly, if O satisfies one-section inflation, then P O satisfies one-section inflation.
Example 1.
(1) 
Given O(a, b) = ab(a + b)/2, then
P_O(a, b) = ( √( a_1² b_1² (a_1² + b_1²) / 2 ), √( 1 − (1 − a_2²)(1 − b_2²)(2 − a_2² − b_2²) / 2 ) )
is a PFOF;
(2) 
Given O(a, b) = min(a, b), then
P_O(a, b) = ( √(min(a_1², b_1²)), √(1 − min(1 − a_2², 1 − b_2²)) )
is a PFOF.
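As a hedged illustration (not from the paper's own code), the short Python sketch below builds a representable PFOF following the construction of Proposition 1, using the product overlap function; the names and sample values are only for demonstration.

```python
# A hedged sketch (not from the paper): a representable PFOF from an overlap
# function via Proposition 1, P_O(a, b) = (sqrt(O(a1^2, b1^2)),
# sqrt(1 - O(1 - a2^2, 1 - b2^2))), here with the product overlap O(x, y) = x*y.
from math import sqrt

def O(x, y):
    return x * y                      # product overlap function

def pfof(a, b):
    (a1, a2), (b1, b2) = a, b         # PF numbers: (membership, non-membership)
    mu = sqrt(O(a1**2, b1**2))
    nu = sqrt(1 - O(1 - a2**2, 1 - b2**2))
    return (mu, nu)

p = pfof((0.8, 0.5), (0.6, 0.7))
print(p, p[0]**2 + p[1]**2 <= 1 + 1e-12)   # the result stays inside L_P
```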
Definition 13.
A PFOF is representable if, for all a = (a_1, a_2), b = (b_1, b_2) ∈ L_P, it can be expressed as P_O(a, b) = ( √(O(a_1², b_1²)), √(1 − O(1 − a_2², 1 − b_2²)) ), where O is an overlap function.
Remark 4.
We can break down the representable form of the PFOF into two parts for clarity. The first part, √(O(a_1², b_1²)), represents the aggregation of the membership parts of the two PF numbers, where O is the overlap function and a_1 and b_1 are the membership values. The second part, √(1 − O(1 − a_2², 1 − b_2²)), is the dual of the first part, except that a_2 and b_2 are the non-membership values of the two PF numbers; this part aggregates the non-membership values.
Example 2.
(1) 
The PFOFs in Example 1 are all representable PFOFs;
(2) 
P_O(a, b) = ( √(min(1, a_1² b_1², (1 − a_2²)(1 − b_2²))), √(1 − min(1, a_1² b_1², (1 − a_2²)(1 − b_2²))) ) is an unrepresentable PFOF.
We can easily find that P_O is a PFOF (it satisfies the conditions of Definition 12). Suppose it were representable, and let O(a_1, b_1) = min(1, a_1 b_1, (1 − a_2)(1 − b_2)). There are four variables in the function O(a_1, b_1), namely a_1, b_1, a_2, and b_2. If a_1 = b_1 = 1, we have O(1, 1) = min(1, 1, (1 − a_2)(1 − b_2)), so the value of O(1, 1) also depends on a_2 and b_2. If a_2 b_2 ≠ 0, then we find O(1, 1) ≠ 1, and O(a_1, b_1) does not satisfy the conditions of an overlap function. This proves that P_O(a, b) is an unrepresentable PFOF.
Proposition 2.
A binary function P_O : L_P² → L_P is a PFOF if there are two functions u(a, b), v(a, b) : [0, 1]² → [0, 1] such that, for all a = (a_1, a_2), b = (b_1, b_2) ∈ L_P,
P_O(a, b) = ( √( u(a_1², b_1²) / ( u(a_1², b_1²) + v(a_1², b_1²) ) ), √( 1 − u(1 − a_2², 1 − b_2²) / ( u(1 − a_2², 1 − b_2²) + v(1 − a_2², 1 − b_2²) ) ) ),
where u ( a , b ) , v ( a , b ) satisfies the following:
a. 
u ( a , b ) and v ( a , b ) are symmetric;
b. 
u is non-decreasing and v is non-increasing;
c. 
u(a, b) = 0 ⟺ ab = 0;
d. 
v(a, b) = 0 ⟺ ab = 1;
e. 
u and v are continuous.
Example 3.
Let u(a, b) = a²b² and v(a, b) = 1 − a²b². Then, P_O(a, b) = ( a_1² b_1², √(1 − (1 − a_2²)²(1 − b_2²)²) ) is a PFOF.

4. Similarity Measure of PF Sets Based on Fuzzy Score Functions

In this section, we propose a new similarity measure for PF sets based on fuzzy score functions and use several examples to compare it with existing methods, demonstrating its effectiveness and rationality.

4.1. A New Similarity Measure for PF Sets

Before introducing the similarity measure, we first give the concept of fuzzy score functions.
Definition 14.
A binary function FS : [0, 1]² → [0, 1] is called a fuzzy score function if, for all a, b, c, d ∈ [0, 1], it satisfies the following:
a. 
Boundary condition: FS(a, b) = 0 ⟺ a = 0 and b = 1;
b. 
Boundary condition: FS(a, b) = 1 ⟺ a = 1 and b = 0;
c. 
Monotonicity: FS(a, b) ≤ FS(c, d) if a ≤ c and b ≥ d;
d. 
Continuity.
Example 4.
We define functions as follows:
(1) 
FS(a, b) = (3/2)^a − b/2 − 1/2;
(2) 
FS(a, b) = (19/10)^a + (9/10)^b − 19/10;
(3) 
FS(a, b) = (a² − b² + 1)/2.
It is straightforward to confirm that the functions above are all fuzzy score functions. Figure 3 shows images with different FS values.
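The short Python sketch below (not from the paper) restates the three fuzzy score functions of Example 4 as reconstructed above and checks the two boundary conditions of Definition 14; the function names are illustrative only.

```python
# A hedged sketch (not from the paper): the fuzzy score functions of Example 4.

def fs1(a, b):
    return (3 / 2) ** a - b / 2 - 1 / 2

def fs2(a, b):
    return (19 / 10) ** a + (9 / 10) ** b - 19 / 10

def fs3(a, b):
    return (a ** 2 - b ** 2 + 1) / 2

for fs in (fs1, fs2, fs3):
    print(fs(0, 1), fs(1, 0))   # 0 and 1 (up to floating-point error) for each FS
```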
Remark 5.
We apply the fuzzy score functions to calculate the score values of PF numbers, where, for any a = (a_1, a_2), b = (b_1, b_2) ∈ L_P, from the definition of the fuzzy score functions we can clearly find the following:
a. 
FS(a_1, a_2) = 0 ⟺ a = (0, 1);
b. 
FS(a_1, a_2) = 1 ⟺ a = (1, 0);
c. 
FS(a_1, a_2) ≤ FS(b_1, b_2) if a ≤_P b.
Then, we present a new PF set similarity measure based on fuzzy score functions. First, we introduce the definition of a generalized PF set similarity measure.
Definition 15
([33]). Let U be the domain. PF(U) denotes all PF sets on the domain U, and the mapping S : PF(U)² → [0, 1] is called a similarity measure of PF sets if, for all B_1, B_2, B_3 ∈ PF(U), it satisfies the following:
a. 
0 ≤ S(B_1, B_2) ≤ 1;
b. 
S ( B 1 , B 2 ) = S ( B 2 , B 1 ) ;
c. 
S(B_1, B_2) = 1 ⟺ B_1 = B_2;
d. 
If B_1 ⊆_P B_2 ⊆_P B_3, then S(B_1, B_3) ≤ min(S(B_1, B_2), S(B_2, B_3)), where ⊆_P is the inclusion operator of PF sets.
Definition 16.
Let U = {b_1, b_2, …, b_n} be a domain, B_1, B_2 ∈ PF(U), and FS : [0, 1]² → [0, 1] be a fuzzy score function. Then, we define the following mapping S : PF(U)² → [0, 1] as the PF set similarity measure:
S(B_1, B_2) = (1/n) ∑_{i=1}^{n} [ min( FS(μ_P(B_1(b_i)), v_P(B_1(b_i))), FS(μ_P(B_2(b_i)), v_P(B_2(b_i))) ) / max( FS(μ_P(B_1(b_i)), v_P(B_1(b_i))), FS(μ_P(B_2(b_i)), v_P(B_2(b_i))) ) ].
The following conditions are stipulated:
(1) 
S(∅, ∅) = 1;
(2) 
If FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) = FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) = 0 , then
min( FS(μ_P(B_1(b_i)), v_P(B_1(b_i))), FS(μ_P(B_2(b_i)), v_P(B_2(b_i))) ) / max( FS(μ_P(B_1(b_i)), v_P(B_1(b_i))), FS(μ_P(B_2(b_i)), v_P(B_2(b_i))) ) = 1.
Remark 6.
We will analyze the PF set similarity measure in Definition 16 from the inside out. First, FS(μ_P(B_1(b_i)), v_P(B_1(b_i))) and FS(μ_P(B_2(b_i)), v_P(B_2(b_i))) represent the results of the fuzzy score function on the ith element of the PF sets B_1 and B_2, respectively, which can be understood as their fuzzy score values. The ratio of the smaller of these two score values to the larger one therefore reflects the similarity between the ith elements of B_1 and B_2. Finally, averaging this ratio over all n elements, as in the formula of Definition 16, reflects the overall similarity between the PF sets B_1 and B_2.
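The following hedged Python sketch (not from the paper's code) implements the measure of Definition 16 on PF sets given as lists of (membership, non-membership) pairs; the fuzzy score function fs3 from Example 4(3) is used here, and the sample sets are B_3 and B_4 from Section 4.2.

```python
# A hedged implementation sketch (not from the paper) of Definition 16.

def fs3(a, b):
    return (a ** 2 - b ** 2 + 1) / 2

def similarity(B1, B2, FS=fs3):
    total = 0.0
    for (m1, v1), (m2, v2) in zip(B1, B2):
        s1, s2 = FS(m1, v1), FS(m2, v2)
        # stipulation: if both score values are 0, the ratio is taken to be 1
        total += 1.0 if max(s1, s2) == 0 else min(s1, s2) / max(s1, s2)
    return total / len(B1)

B3 = [(0.5, 0.4), (0.4, 0.5), (0.3, 0.3), (0.2, 0.2)]
B4 = [(0.4, 0.4), (0.5, 0.5), (0.2, 0.2), (0.3, 0.3)]
print(similarity(B3, B4))   # ≈ 0.9569 with this choice of FS
```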
Theorem 2.
The PF set similarity measure proposed in Definition 16 satisfies all the conditions of the generalized PF set similarity measure in Definition 15.
Proof. 
(a) If B 1 = B 2 = , S ( , ) = 1 , if B 1 and B 2 are not all ∅, then because 0 FS 1 , i I ( 1 , 2 , , n ) , we have
0 m i n ( FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) , FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) m a x ( FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) , FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) 1 ,
Then, we have
0 m i n ( FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) , FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) m a x ( FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) , FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) 1 ,
We can obtain
0 i = 1 n m i n ( FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) , FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) m a x ( FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) , FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) n ,
and therefore, there is
0 i = 1 n m i n ( FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) , FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) m a x ( FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) , FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) n = S ( B 1 , B 2 ) 1 ;
(b)
S ( B 1 , B 2 ) = i = 1 n m i n ( FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) , FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) m a x ( FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) , FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) n = i = 1 n m i n ( FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) , FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) m a x ( FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) , FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) n = S ( B 2 , B 1 ) ;
(c) If B 1 = B 2 , i I , we can obtain m i n ( FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) , FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) = m a x ( FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) , FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) , and thus we have
S ( B 1 , B 2 ) = i = 1 n m i n ( FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) , FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) m a x ( FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) , FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) n = n n = 1 ;
By definition, if B 1 = B 2 = , then S ( B 1 , B 2 ) = 1 .
If B 1 B 2 , according to the monotonicity of FS , for some i I s u b I , we can find that
0 m i n ( FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) , FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) < m a x ( FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) , FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) 1 ,
Then, for i I , there is
0 i = 1 n m i n ( FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) , FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) m a x ( FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) , FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) < n ,
and furthermore
0 i = 1 n m i n ( FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) , FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) m a x ( FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) , FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) n = S ( B 1 , B 2 ) < 1 ,
Thus, we find that if B 1 B 2 , S ( B 1 , B 2 ) 1 , hence S ( B 1 , B 2 ) = 1 B 1 = B 2 ;
(d) If B 1 P B 2 P B 3 , i I , we obtain μ P ( B 1 ( b i ) ) μ P ( B 2 ( b i ) ) μ P ( B 3 ( b i ) ) , v P ( B 1 ( b i ) ) v P ( B 2 ( b i ) ) v P ( B 3 ( b i ) ) , and according to the monotonicity of FS , we have FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) ) FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) ) FS ( μ P ( B 3 ( b i ) ) ,   v P ( B 3 ( b i ) ) ) , and thus there is
S ( B 1 , B 3 ) = i = 1 n FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) FS ( μ P ( B 3 ( b i ) ) , v P ( B 3 ( b i ) ) n , S ( B 1 , B 2 ) = i = 1 n FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) n , S ( B 2 , B 3 ) = i = 1 n FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) FS ( μ P ( B 3 ( b i ) ) , v P ( B 3 ( b i ) ) n ,
We can obtain
S ( B 1 , B 3 ) = i = 1 n FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) FS ( μ P ( B 3 ( b i ) ) , v P ( B 3 ( b i ) ) n i = 1 n FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) n = S ( B 1 , B 2 ) , S ( B 1 , B 3 ) = i = 1 n FS ( μ P ( B 1 ( b i ) ) , v P ( B 1 ( b i ) ) FS ( μ P ( B 3 ( b i ) ) , v P ( B 3 ( b i ) ) n i = 1 n FS ( μ P ( B 2 ( b i ) ) , v P ( B 2 ( b i ) ) FS ( μ P ( B 3 ( b i ) ) , v P ( B 3 ( b i ) ) n = S ( B 2 , B 3 ) ,
Therefore, S ( B 1 , B 3 ) m i n ( S ( B 1 , B 2 ) , S ( B 2 , B 3 ) ) . □

4.2. Numerical Experiments

1. The numerical values in this section were derived from [34]. The FS in S is taken from Example 4(1). Table 1 presents the existing similarity measures, while Table 2 compares the numerical results obtained from the similarity measure in Definition 16 with those from existing methods.
From the numerical calculation results, we observe that certain similarity measures exhibit the following issues:
(1)
In Case 1, the calculated values of S_C and S_P are both one, yet b_1 ≠ b_2, which is unreasonable. Additionally, the similarity of S_Y with respect to the PF number (0, 0) cannot be calculated.
(2)
In both Case 1 and Case 2, while b 2 is the same, b 1 differs, yet the calculated result of S H remains the same.
(3)
In both Case 3 and Case 4, b 2 is the same, but b 1 differs, and the calculated results for S H Y , S H , S C , S L , and S Q are identical.
These issues do not arise in the method we propose.
2. The following PF sets are given:
B 1 = ( 0.3 , 0.3 ) b 1 + ( 0.4 , 0.4 ) b 2 + ( 0.4 , 0.4 ) b 3 + ( 0.4 , 0.4 ) b 4 ; B 2 = ( 0.5 , 0.5 ) b 1 + ( 0.1 , 0.1 ) b 2 + ( 0.5 , 0.5 ) b 3 + ( 0.1 , 0.1 ) b 4 ; B 3 = ( 0.5 , 0.4 ) b 1 + ( 0.4 , 0.5 ) b 2 + ( 0.3 , 0.3 ) b 3 + ( 0.2 , 0.2 ) b 4 ; B 4 = ( 0.4 , 0.4 ) b 1 + ( 0.5 , 0.5 ) b 2 + ( 0.2 , 0.2 ) b 3 + ( 0.3 , 0.3 ) b 4 ;
Here, FS in S is taken from Example 4(2). Table 3 shows the similarity between B and other PF sets calculated by different similarity measures.
From Table 3, we can see that except for S Y , S C , and S P , the calculation results of other methods are all S ( B 1 , B 4 ) < S ( B 2 , B 4 ) < S ( B 3 , B 4 ) .
Through the two experiments, it is shown that the PF similarity measure we proposed has certain effectiveness and rationality. Since the fuzzy scoring function is a family of functions, the similarity measure based on the fuzzy scoring function proposed in this paper is also a family of functions. Different fuzzy scoring functions can be selected for different problems, which has greater flexibility than existing methods.

5. PFRSs Based on PFOFs

In this section, we introduce a novel PFRS model derived from PFOFs, discuss its related properties, and extend it to the dual domain. Finally, we propose a new approximation accuracy measure for PFRSs.
Definition 17.
We define the function R_PO : L_P² → L_P as
R_PO(a, b) = sup{ c ∈ L_P : P_O(a, c) ≤_P b },
where P O is a PFOF. Then, we call the function R P O a residual implication induced by P O .
Example 5.
(1) 
Given P_O(a, b) = ( a_1 b_1, √(1 − (1 − a_2²)(1 − b_2²)) ), we can find the induced residual implication as follows:
R_PO(a, b) =
(1, 0), if a_1 ≤ b_1 and a_2 ≥ b_2;
( √((1 − b_2²)/(1 − a_2²)), √(1 − (1 − b_2²)/(1 − a_2²)) ), if a_1 ≤ b_1 and a_2 < b_2;
( b_1/a_1, 0 ), if a_1 > b_1 and a_2 ≥ b_2;
( b_1/a_1, √(1 − (1 − b_2²)/(1 − a_2²)) ), if a_1 > b_1 and a_2 < b_2 and (b_1/a_1)² ≤ (1 − b_2²)/(1 − a_2²);
( √((1 − b_2²)/(1 − a_2²)), √(1 − (1 − b_2²)/(1 − a_2²)) ), if a_1 > b_1 and a_2 < b_2 and (b_1/a_1)² > (1 − b_2²)/(1 − a_2²).
(2) 
Given P_O(a, b) = ( √(min(a_1, b_1)), √(1 − min(√(1 − a_2²), √(1 − b_2²))) ), we can find the induced residual implication as follows:
R_PO(a, b) =
(1, 0), if √a_1 ≤ b_1 and √(1 − √(1 − a_2²)) ≥ b_2;
( 1 − b_2², √(1 − (1 − b_2²)²) ), if √a_1 ≤ b_1 and √(1 − √(1 − a_2²)) < b_2;
( b_1², 0 ), if √a_1 > b_1 and √(1 − √(1 − a_2²)) ≥ b_2;
( b_1², √(1 − (1 − b_2²)²) ), if √a_1 > b_1 and √(1 − √(1 − a_2²)) < b_2 and b_1² ≤ 1 − b_2²;
( 1 − b_2², √(1 − (1 − b_2²)²) ), if √a_1 > b_1 and √(1 − √(1 − a_2²)) < b_2 and b_1² > 1 − b_2².
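Because the closed-form cases above are easy to mistype, the hedged Python sketch below (not from the paper) approximates R_PO directly from Definition 17 by a grid search over L_P; it uses the PFOF of Example 5(1) as written above, and the supremum in (L_P, ≤_P) is taken componentwise as in Theorem 1.

```python
# A hedged numerical sketch (not from the paper): approximating the residual
# implication of Definition 17 by a grid search over L_P.
from math import sqrt

def pfof(a, b):                                   # the PFOF of Example 5(1)
    (a1, a2), (b1, b2) = a, b
    return (a1 * b1, sqrt(1 - (1 - a2**2) * (1 - b2**2)))

def leq_p(x, y, eps=1e-9):                        # the order <=_P, with tolerance
    return x[0] <= y[0] + eps and x[1] >= y[1] - eps

def residual(a, b, n=200):
    grid = [(i / n, j / n) for i in range(n + 1) for j in range(n + 1)
            if (i / n) ** 2 + (j / n) ** 2 <= 1]  # grid points of L_P
    feas = [c for c in grid if leq_p(pfof(a, c), b)]
    # supremum in (L_P, <=_P): (max of memberships, min of non-memberships)
    return (max(c[0] for c in feas), min(c[1] for c in feas))

print(residual((0.8, 0.6), (0.6, 0.5)))           # (0.75, 0.0), the (b1/a1, 0) case
```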
Definition 18
([42]). Let U and W be two domains. A PF set R_Pd = { ((b_1, b_2), μ_P(b_1, b_2), v_P(b_1, b_2)) | (b_1, b_2) ∈ U × W } is called a PF relation on the dual domain U × W. For every (b_1, b_2) ∈ U × W, μ_P(b_1, b_2) represents the degree to which the elements b_1 ∈ U and b_2 ∈ W have the relation R_Pd, v_P(b_1, b_2) represents the degree to which the elements b_1 ∈ U and b_2 ∈ W do not have the relation R_Pd, and the set of all PF relations on U × W is denoted by PFR(U × W). If U = W, then R_P is called a PF relation on U.
The PF relation R_P on the domain U is called reflexive if, ∀ b ∈ U, it satisfies R_P(b, b) = (1, 0); R_P is called symmetric if, ∀ b_1, b_2 ∈ U, it satisfies R_P(b_1, b_2) = R_P(b_2, b_1); and R_P is called transitive if, ∀ b_1, b_2, b_3 ∈ U, it satisfies μ_P(b_1, b_3) ≥ ⋁_{b_2 ∈ U} (μ_P(b_1, b_2) ∧ μ_P(b_2, b_3))² and v_P(b_1, b_3) ≤ ⋀_{b_2 ∈ U} (v_P(b_1, b_2) ∨ v_P(b_2, b_3))².
Definition 19.
Let (U, R_P) be a PF approximation space, where U is a domain and R_P is a PF relation on the domain U, and let B ∈ PF(U). We denote the PF rough approximation operators of B by R_P^{R_PO} B (the lower approximation) and R_P^{P_O} B (the upper approximation), defined as follows:
R_P^{R_PO} B(b_1) = ⋀_{P, b_2 ∈ U} R_PO( R_P(b_1, b_2), B(b_2) ),   R_P^{P_O} B(b_1) = ⋁_{P, b_2 ∈ U} P_O( R_P(b_1, b_2), B(b_2) ),
where P_O is a PFOF and R_PO is the residual implication induced by P_O. If R_P^{R_PO} B ≠ R_P^{P_O} B, then B is called undefinable, and the pair (R_P^{R_PO} B, R_P^{P_O} B) is called a PFRS.
Example 6.
Let (U, R_P) be a PF approximation space with the domain U = {b_1, b_2, b_3, b_4}. Table 4 is the PF relation matrix, and B = (0.80, 0.50) b_1 + (0.70, 0.60) b_2 + (0.60, 0.20) b_3 + (0.90, 0.40) b_4.
When taking R_PO and P_O as in Example 5(2), according to Definition 19, we can obtain
R_P^{R_PO} B = (0.3600, 0.7684) b_1 + (0.3600, 0.7684) b_2 + (0.3600, 0.7684) b_3 + (0.4900, 0.7684) b_4,   R_P^{P_O} B = (0.8944, 0.1421) b_1 + (0.8944, 0.2889) b_2 + (0.8367, 0.1421) b_3 + (0.9487, 0.2889) b_4.
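The hedged Python sketch below (not from the paper, and not reproducing Table 4) shows only the aggregation step of Definition 19: for a fixed b_1, the pointwise implication values and overlap values over all b_2 (placeholder numbers) are combined with the lattice infimum and supremum of Theorem 1.

```python
# A hedged sketch (not from the paper): lattice aggregation used in Definition 19.

def inf_p(pairs):   # infimum: (min of memberships, max of non-memberships)
    return (min(p[0] for p in pairs), max(p[1] for p in pairs))

def sup_p(pairs):   # supremum: (max of memberships, min of non-memberships)
    return (max(p[0] for p in pairs), min(p[1] for p in pairs))

# pointwise R_PO / P_O values for one b1 over all b2 in U (placeholder numbers)
impl_values    = [(1.00, 0.00), (0.49, 0.77), (0.36, 0.77), (0.96, 0.28)]
overlap_values = [(0.80, 0.50), (0.70, 0.60), (0.60, 0.20), (0.90, 0.40)]

print("lower approximation at b1:", inf_p(impl_values))     # (0.36, 0.77)
print("upper approximation at b1:", sup_p(overlap_values))  # (0.90, 0.20)
```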
Theorem 3.
Let (U, R_P) be a PF approximation space and B_1, B_2 ∈ PF(U). The PF rough approximation operators in Definition 19 satisfy the following:
(1) 
R_P^{R_PO} U = U;
(2) 
R_P^{P_O} ∅ = ∅;
(3) 
B_1 ⊆_P B_2 ⟹ R_P^{R_PO} B_1 ⊆_P R_P^{R_PO} B_2 and R_P^{P_O} B_1 ⊆_P R_P^{P_O} B_2;
(4) 
R_P^{R_PO} (B_1 ∩_P B_2) = R_P^{R_PO} B_1 ∩_P R_P^{R_PO} B_2, where ∩_P is the intersection operation of PF sets;
(5) 
R_P^{P_O} (B_1 ∪_P B_2) = R_P^{P_O} B_1 ∪_P R_P^{P_O} B_2, where ∪_P is the union operation of PF sets;
(6) 
R_P^{R_PO} (B_1 ∪_P B_2) ⊇_P R_P^{R_PO} B_1 ∪_P R_P^{R_PO} B_2;
(7) 
R_P^{P_O} (B_1 ∩_P B_2) ⊆_P R_P^{P_O} B_1 ∩_P R_P^{P_O} B_2.
Proof. 
(1)
R P R P O U = P b 2 U R P O ( R P ( b 1 , b 2 ) , U ( b 2 ) ) = P b 2 U R P O ( R P ( b 1 , b 2 ) , ( 1 , 0 ) ) = ( 1 , 0 ) = U ;
(2)
R P P O = P b 2 U P O ( R P ( b 1 , b 2 ) , ( b 2 ) ) = P b 2 U P O ( R P ( b 1 , b 2 ) , ( 0 , 1 ) ) = ( 0 , 1 ) = ;
(3) Since B 1 P B 2 , b 2 U , we have μ P ( B 1 ( b 2 ) ) μ P ( B 2 ( b 2 ) ) and v P ( B 1 ( b 2 ) ) v P ( B 2 ( b 2 ) ) , that is, B 1 ( b 2 ) P B 2 ( b 2 ) . Due to the monotonicity of P O , for any b 1 U , we have
R P P O B 1 ( b 1 ) = P b 2 U P O ( R P ( b 1 , b 2 ) , B 1 ( b 2 ) ) P P b 2 U P O ( R P ( b 1 , b 2 ) , B 2 ( b 2 ) ) = R P P O B 2 ( b 1 ) ,
Therefore, we can obtain R P P O B 1 P R P P O B 2 . Similarly, we can prove that R P R P O B 1 P R P R P O B 2 ;
(4)
R P R P O ( B 1 P B 2 ) ( b 1 ) = P b 2 U R P O ( R P ( b 1 , b 2 ) , ( B 1 P B 2 ) ( b 2 ) ) = P b 2 U R P O ( R P ( b 1 , b 2 ) , B 1 ( b 2 ) P B 2 ( b 2 ) ) = ( P b 2 U R P O ( R P ( b 1 , b 2 ) , B 1 ( b 2 ) ) ) P ( P b 2 U R P O ( R P ( b 1 , b 2 ) , B 2 ( b 2 ) ) ) = R P R P O B 1 ( b 1 ) P R P R P O B 2 ( b 1 ) ;
(5)
R P P O ( B 1 P B 2 ) ( b 1 ) = P b 2 U P O ( R P ( b 1 , b 2 ) , ( B P B 2 ) ( b 2 ) ) = P b 2 U P O ( R P ( b 1 , b 2 ) , B 1 ( b 2 ) P B 2 ( b 2 ) ) = ( P b 2 U P O ( R P ( b 1 , b 2 ) , B 1 ( b 2 ) ) ) P ( P b 2 U P O ( R P ( b 1 , b 2 ) , B 2 ( b 2 ) ) ) = R P P O B 1 ( b 1 ) P R P P O B 2 ( b 1 ) ;
(6, 7) These are direct consequences of (3). □
Theorem 4.
Let U be a domain and R_P1, R_P2 ∈ PFR(U). If R_P1 ⊆_P R_P2, then for any B ∈ PF(U), we have the following:
(1) 
R_P1^{P_O} B ⊆_P R_P2^{P_O} B;
(2) 
R_P1^{R_PO} B ⊇_P R_P2^{R_PO} B.
Proof. 
(1)
Since R P 1 P R P 2 , b 1 , b 2 U , we have R P 1 ( b 1 , b 2 ) P R P 2 ( b 1 , b 2 ) , and because of the monotonicity of P O , we have
R P 1 P O B ( b 1 ) = P b 2 U P O ( R P 1 ( b 1 , b 2 ) , B ( b 2 ) ) P P b 2 U P O ( R P 2 ( b 1 , b 2 ) , B ( b 2 ) ) = R P 2 P O B ( b 1 ) ,
Therefore, we can find R P 1 P O B P R P 2 P O B ;
(2)
The proof method is similar to that in (1). We can easily obtain R P 1 R P O B P R P 2 R P O B .
We can also prove that the idempotence law of this model does not hold; that is, R_P^{P_O}( R_P^{P_O} B ) ≠ R_P^{P_O} B and R_P^{R_PO}( R_P^{R_PO} B ) ≠ R_P^{R_PO} B.
Example 7.
Let B be the PF set in Example 6, for which R_P^{P_O} B = (0.8944, 0.1421) b_1 + (0.8944, 0.2889) b_2 + (0.8367, 0.1421) b_3 + (0.9487, 0.2889) b_4. Then, we can obtain R_P^{P_O}( R_P^{P_O} B ) = (0.9457, 0.1008) b_1 + (0.9457, 0.2065) b_2 + (0.9147, 0.1008) b_3 + (0.9740, 0.2065) b_4 ≠ R_P^{P_O} B. We can also obtain R_P^{R_PO}( R_P^{R_PO} B ) ≠ R_P^{R_PO} B.
Definition 20.
Let (U, V, R_Pd) be a dual-domain PF approximation space, where U and V are two domains and R_Pd is a PF relation on U × V. Then, for B ∈ PF(V), denote the PF rough approximation operators of B in (U, V, R_Pd) by R_Pd^{R_PO} B and R_Pd^{P_O} B, as follows:
R_Pd^{R_PO} B(b_1) = ⋀_{P, b_2 ∈ V} R_PO( R_Pd(b_1, b_2), B(b_2) ), b_1 ∈ U,   R_Pd^{P_O} B(b_1) = ⋁_{P, b_2 ∈ V} P_O( R_Pd(b_1, b_2), B(b_2) ), b_1 ∈ U,
where P_O is a PFOF and R_PO is the residual implication induced by P_O; the pair (R_Pd^{R_PO} B, R_Pd^{P_O} B) is called a dual-domain PFRS of B in (U, V, R_Pd).
Theorem 5.
Let (U, V, R_Pd) be a dual-domain PF approximation space. Then, for all B_1, B_2 ∈ PF(V), the PF rough approximation operators in Definition 20 satisfy the following:
(1) 
B_1 ⊆_P B_2 ⟹ R_Pd^{R_PO} B_1 ⊆_P R_Pd^{R_PO} B_2 and R_Pd^{P_O} B_1 ⊆_P R_Pd^{P_O} B_2;
(2) 
R_Pd^{R_PO} (B_1 ∩_P B_2) = R_Pd^{R_PO} B_1 ∩_P R_Pd^{R_PO} B_2;
(3) 
R_Pd^{P_O} (B_1 ∪_P B_2) = R_Pd^{P_O} B_1 ∪_P R_Pd^{P_O} B_2;
(4) 
R_Pd^{R_PO} (B_1 ∪_P B_2) ⊇_P R_Pd^{R_PO} B_1 ∪_P R_Pd^{R_PO} B_2;
(5) 
R_Pd^{P_O} (B_1 ∩_P B_2) ⊆_P R_Pd^{P_O} B_1 ∩_P R_Pd^{P_O} B_2.
Proof. 
The proof method is consistent with Theorem 3 and is omitted. □
Theorem 6.
Let U and V be two domains and R_P1d, R_P2d ∈ PFR(U × V). If R_P1d ⊆_P R_P2d, then for any B ∈ PF(V), we have the following:
(1) 
R_P1d^{P_O} B ⊆_P R_P2d^{P_O} B;
(2) 
R_P1d^{R_PO} B ⊇_P R_P2d^{R_PO} B.
Proof. 
The proof method is consistent with Theorem 4 and is omitted. □
Next, the approximate accuracy and roughness of classical rough sets are extended to PFRSs.
Definition 21.
Let (U, R_F) be a fuzzy approximation space, where R_F is a fuzzy relation on U, let M be a fuzzy set on U, and let the two fuzzy sets R̲_F M and R̄_F M be the fuzzy rough approximation operators of M:
(1) 
The approximate accuracy of the fuzzy set M is defined as follows:
α_{R_F}(M) = ∑_{m ∈ U} μ_{R̲_F M}(m) / ∑_{m ∈ U} μ_{R̄_F M}(m),
It is stipulated that α_{R_F}(∅) = 1;
(2) 
The roughness of the fuzzy set M is defined as follows:
ρ_{R_F}(M) = 1 − α_{R_F}(M).
From Definition 8, we observe that the approximation accuracy of classical rough sets is given by α_R(C) = |R̲C| / |R̄C|, where |R̲C| and |R̄C| denote the cardinalities of the sets R̲C and R̄C, respectively; that is, they represent the number of elements belonging to R̲C and R̄C. Since the membership degree of elements in classical sets is either zero or one, the cardinality of a set can also be interpreted as the sum of the membership degrees of its elements. Therefore, the approximation accuracy of fuzzy rough sets (FRSs) extends that of classical rough sets and remains applicable in the classical case.
For ∑_{m ∈ U} μ_{R̲_F M}(m), a higher membership degree of elements in R̲_F M leads to a greater value of ∑_{m ∈ U} μ_{R̲_F M}(m). Since, in classical fuzzy sets, the sum of the membership and non-membership degrees of each element equals one, an increase in the non-membership degree v_F(m) results in a decrease in μ_F(m), thereby reducing the value of ∑_{m ∈ U} μ_{R̲_F M}(m). The same principle applies to ∑_{m ∈ U} μ_{R̄_F M}(m).
However, as intuitionistic fuzzy (IF) sets and Pythagorean fuzzy (PF) sets introduce hesitation, these rules no longer hold.
Example 8.
Two PF sets are given as follows:
R_P B_1 = (0.2, 0.5) b_1 + (0.7, 0.6) b_2 + (0.5, 0.4) b_3 + (0.6, 0.4) b_4 ≠ (0.2, 0.6) b_1 + (0.7, 0.7) b_2 + (0.5, 0.5) b_3 + (0.6, 0.5) b_4 = R_P B_2,
However, ∑_{b ∈ U} μ_P(R_P B_1(b)) = ∑_{b ∈ U} μ_P(R_P B_2(b)), and the approximate accuracy of classical FRSs is no longer applicable to PFRSs.
Definition 22.
Let (U, R_P) be a PF approximation space, where R_P is a PF relation on U and B ∈ PF(U), and let the two PF sets R̲_P B and R̄_P B be the PF rough approximation operators of B.
(1) 
The approximate accuracy of the PF set B is defined as follows:
α_{R_P}(B) = ∑_{b ∈ U} FS( μ_{R̲_P B}(b), v_{R̲_P B}(b) ) / ∑_{b ∈ U} FS( μ_{R̄_P B}(b), v_{R̄_P B}(b) ),
where FS is a fuzzy score function as in Definition 14. It is stipulated that α_{R_P}(∅) = 1;
(2) 
The roughness of the PF set B is defined as follows:
ρ_{R_P}(B) = 1 − α_{R_P}(B).
Example 9.
FS is taken from Example 4(3), and the approximate accuracy of the PFRS B in Example 6 is calculated to be 0.3719. Figure 4 shows the FS values of different elements in B and its upper and lower approximations, which more intuitively reflects the approximate accuracy of B.
Since μ_F(m) is a special case of FS(μ_F(m), v_F(m)), the approximate accuracy of PFRSs is an extension of that of FRSs and classical rough sets, and it is also applicable to the classical cases.
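A hedged Python sketch of Definition 22 follows (not from the paper): the accuracy is the ratio of summed fuzzy score values of the lower and upper approximations, here with FS from Example 4(3); the approximation values below are illustrative placeholders, not those of Example 6.

```python
# A hedged sketch (not from the paper): PFRS approximation accuracy (Definition 22).

def fs3(mu, v):
    return (mu ** 2 - v ** 2 + 1) / 2

def pf_accuracy(lower, upper, FS=fs3):
    num = sum(FS(mu, v) for (mu, v) in lower.values())
    den = sum(FS(mu, v) for (mu, v) in upper.values())
    return num / den

lower = {"b1": (0.36, 0.77), "b2": (0.49, 0.77)}   # illustrative lower approximation
upper = {"b1": (0.89, 0.14), "b2": (0.95, 0.29)}   # illustrative upper approximation
print(pf_accuracy(lower, upper))                   # ≈ 0.33 for these placeholder values
```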

6. MCDM Methods Based on PF Information

In this section, we propose different MCDM algorithms based on the PF similarity measure and the PFRSs, give specific algorithm steps, and verify them with examples.

6.1. Typical Case I

6.1.1. Problem Analysis

Given a domain C = {c_1, c_2, …, c_m} as the object domain and a domain A = {a_1, a_2, …, a_n} as the attribute value domain, a_n(c_m) = (μ_n(c_m), v_n(c_m)), where μ_n(c_m) indicates the degree to which object c_m satisfies attribute a_n and v_n(c_m) indicates the degree to which object c_m does not satisfy attribute a_n, with (μ_n(c_m))² + (v_n(c_m))² ≤ 1. We need to sort the elements in the object domain and select the best one.

6.1.2. Decision Algorithm I

Step 1: First, we evaluate the attribute values of different elements in the object domain and obtain the PF relationship matrix R_Pd. (For a specific problem, experts in related fields evaluate the attributes of the elements in the object domain and convert their attribute values from various numerical forms into PF information.)
Step 2: According to the obtained PF relationship, we find the optimal solution c b and the worst solution c w of this problem, which are the PF sets in the domain A:
a_n(c_b) = (μ_n(c_b), v_n(c_b)), with μ_n(c_b) = max_{c ∈ C} μ_n(c) and v_n(c_b) = min_{c ∈ C} v_n(c);   a_n(c_w) = (μ_n(c_w), v_n(c_w)), with μ_n(c_w) = min_{c ∈ C} μ_n(c) and v_n(c_w) = max_{c ∈ C} v_n(c).
Step 3: Using the similarity measure proposed in Definition 16, we calculate the similarity between the PF set composed of the attributes of each element in the object domain and the optimal solution, S(c, c_b) for c ∈ C, as well as the worst solution, S(c, c_w) for c ∈ C.
Step 4: We use the sorting score function R 1 to evaluate the elements in the object domain and rank them based on their scores. The sorting score function is defined as follows:
R_1(c) = S(c, c_b) / ( S(c, c_b) + S(c, c_w) ), c ∈ C.
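As a hedged end-to-end illustration of Decision Algorithm I (not the paper's code), the Python sketch below runs Steps 2–4 on a small hypothetical PF decision matrix (Table 5 is not reproduced in the text); FS is Example 4(1) as reconstructed above.

```python
# A hedged sketch (not from the paper): Decision Algorithm I on hypothetical data.

def fs1(a, b):
    return (3 / 2) ** a - b / 2 - 1 / 2

def similarity(X, Y, FS=fs1):                      # Definition 16
    ratios = []
    for (m1, v1), (m2, v2) in zip(X, Y):
        s1, s2 = FS(m1, v1), FS(m2, v2)
        ratios.append(1.0 if max(s1, s2) == 0 else min(s1, s2) / max(s1, s2))
    return sum(ratios) / len(ratios)

# rows: alternatives c1..c3, columns: attributes a1..a3, entries: (mu, v)
matrix = {"c1": [(0.6, 0.5), (0.7, 0.4), (0.5, 0.6)],
          "c2": [(0.8, 0.3), (0.6, 0.5), (0.7, 0.4)],
          "c3": [(0.7, 0.4), (0.8, 0.2), (0.6, 0.5)]}

n_attr = 3
c_best  = [(max(matrix[c][j][0] for c in matrix), min(matrix[c][j][1] for c in matrix))
           for j in range(n_attr)]                 # Step 2: optimal solution
c_worst = [(min(matrix[c][j][0] for c in matrix), max(matrix[c][j][1] for c in matrix))
           for j in range(n_attr)]                 # Step 2: worst solution

scores = {}
for c, row in matrix.items():                      # Steps 3-4
    s_b, s_w = similarity(row, c_best), similarity(row, c_worst)
    scores[c] = s_b / (s_b + s_w)                  # sorting score R1

print(sorted(scores, key=scores.get, reverse=True), scores)
```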

6.1.3. Example Analysis I

We selected the best comprehensive service from four airlines ( c 1 , c 2 , c 3 , and c 4 ) , with attribute values ( a 1 , a 2 , a 3 , and a 4 ) representing the reservation service, check-in procedure, cabin service, and responsiveness, respectively [43,44,45]. This problem involves a comprehensive evaluation of multiple airlines (decision-making schemes) based on attributes such as reservation services, check-in procedures, cabin services, and responsiveness. It requires weighting and ranking these attributes, making it a typical MCDM problem. The MCDM method quantifies the impact of various attributes, evaluates the pros and cons of each scheme, and helps decision makers objectively and systematically select the airline with the best overall service, thus enhancing the scientific and rational basis of the decision. To address this, we used Decision Algorithm I. Table 5 presents the expert evaluation matrix for the attributes of each airline.
According to the attribute value evaluation matrix, the optimal solution c b and the worst solution c w are as follows:
c b = ( 0.9 , 0.2 ) a 1 + ( 0.9 , 0.2 ) a 2 + ( 0.8 , 0.1 ) a 3 + ( 0.7 , 0.3 ) a 4 ; c w = ( 0.4 , 0.7 ) a 1 + ( 0.7 , 0.6 ) a 2 + ( 0.5 , 0.8 ) a 3 + ( 0.5 , 0.6 ) a 4 ;
When taking Example 4(1) as FS in the similarity measure, the similarities between different objects and the optimal solution and worst solution are calculated as follows (Table 6):
We used the sorting score function R 1 to score the elements in the object domain, and the results are as follows:
R 1 ( C ) = 0.4824 c 1 + 0.5399 c 2 + 0.5596 c 3 + 0.5713 c 4 .
Therefore, the final result was c 4 > c 3 > c 2 > c 1 .

6.1.4. Decision Algorithm II

Step 1: First, we evaluate the attribute values of different elements in the object domain and obtain the PF relationship matrix R P d .
Step 2: According to the obtained PF relationship, we find the optimal solution c b and the worst solution c w of this problem, which are the PF sets in the domain A:
a_n(c_b) = (μ_n(c_b), v_n(c_b)), with μ_n(c_b) = max_{c ∈ C} μ_n(c) and v_n(c_b) = min_{c ∈ C} v_n(c);   a_n(c_w) = (μ_n(c_w), v_n(c_w)), with μ_n(c_w) = min_{c ∈ C} μ_n(c) and v_n(c_w) = max_{c ∈ C} v_n(c).
Step 3: We calculate the PF rough approximation operators of the PF sets c b , c w in the PF approximation space ( C , A , R P d ) : R P d P O c b , R P d R P O c b , R P d P O c w , R P d R P O c w .
Step 4: We use the PFOFs to aggregate the PF rough approximation operators obtained; that is, we have
OPT = P O ( R P d P O c b , R P d R P O c b ) ; WO = P O ( R P d P O c w , R P d R P O c w ) .
Step 5: We use the sorting score function R 2 to score the elements in OPT and WO and finally sort them according to the scores. The sorting score function is as follows:
R_2(c) = FS(OPT(c)) / ( FS(OPT(c)) + FS(WO(c)) ), c ∈ C,
where FS is the fuzzy score functions in Definition 14.
Remark 7.
Since the fuzzy scoring function is a family of functions, not all fuzzy scoring functions are suitable for solving specific problems. It is crucial to select an appropriate fuzzy scoring function. Therefore, different fuzzy scoring functions should be applied and compared with results from other methods to choose the correct parameters.
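To make Steps 4–5 of Decision Algorithm II concrete, the hedged Python sketch below (not the paper's code) aggregates the four rough approximations with a PFOF and ranks with R_2; the approximation values are placeholders rather than those of Section 6.1.5, P_O follows Example 5(1) as reconstructed, and FS is Example 4(3).

```python
# A hedged sketch (not from the paper): Steps 4-5 of Decision Algorithm II.
from math import sqrt

def P_O(a, b):                                    # PFOF used for aggregation
    return (a[0] * b[0], sqrt(1 - (1 - a[1] ** 2) * (1 - b[1] ** 2)))

def fs3(mu, v):                                   # fuzzy score function
    return (mu ** 2 - v ** 2 + 1) / 2

# Step 3 output (placeholder values): upper/lower approximations of c_b and c_w
upper_cb = {"c1": (0.95, 0.21), "c2": (0.89, 0.14)}
lower_cb = {"c1": (0.49, 0.41), "c2": (0.49, 0.28)}
upper_cw = {"c1": (0.84, 0.45), "c2": (0.84, 0.45)}
lower_cw = {"c1": (0.16, 0.93), "c2": (0.16, 0.93)}

scores = {}
for c in upper_cb:
    opt = P_O(upper_cb[c], lower_cb[c])             # Step 4: OPT
    wo  = P_O(upper_cw[c], lower_cw[c])             # Step 4: WO
    scores[c] = fs3(*opt) / (fs3(*opt) + fs3(*wo))  # Step 5: R2
print(scores)
```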

6.1.5. Example Analysis II

We apply Decision Algorithm II to the example in Section 6.1.3.
From Section 6.1.3, we can know that the optimal solution c b and the worst solution c w are as follows:
c b = ( 0.9 , 0.2 ) a 1 + ( 0.9 , 0.2 ) a 2 + ( 0.8 , 0.1 ) a 3 + ( 0.7 , 0.3 ) a 4 ; c w = ( 0.4 , 0.7 ) a 1 + ( 0.7 , 0.6 ) a 2 + ( 0.5 , 0.8 ) a 3 + ( 0.5 , 0.6 ) a 4 .
We take the P O and R P O of Example 5(2) and calculate the PF rough approximation operators of c b and c w for R P d P O c b , R P d R P O c b , R P d P O c w , and R P d R P O c w as follows:
R P d P O c b = ( 0.9487 , 0.2146 ) c 1 + ( 0.9487 , 0.0708 ) c 2 + ( 0.8944 , 0.1421 ) c 3 + ( 0.8944 , 0.1421 ) c 4 ; R P d R P O c b = ( 0.4900 , 0.4146 ) c 1 + ( 0.4900 , 0.4146 ) c 2 + ( 0.4900 , 0.4146 ) c 3 + ( 0.4900 , 0.2800 ) c 4 ; R P d P O c w = ( 0.8367 , 0.4472 ) c 1 + ( 0.8367 , 0.4472 ) c 2 + ( 0.8367 , 0.4472 ) c 3 + ( 0.8367 , 0.4472 ) c 4 ; R P d R P O c w = ( 0.1600 , 0.9330 ) c 1 + ( 0.1600 , 0.9330 ) c 2 + ( 0.1600 , 0.9330 ) c 3 + ( 0.1600 , 0.9330 ) c 4 .
For P O in Example 5(1), we aggregated the obtained PF rough approximation operators, obtaining OPT and WO as follows:
OPT = ( 0.4696 , 0.4583 ) c 1 + ( 0.4696 , 0.4196 ) c 2 + ( 0.4383 , 0.4343 ) c 3 + ( 0.4383 , 0.3115 ) c 4 ; WO = ( 0.1339 , 0.9467 ) c 1 + ( 0.1339 , 0.9647 ) c 2 + ( 0.1339 , 0.9647 ) c 3 + ( 0.1339 , 0.9647 ) c 4 .
We took FS in Example 4(1) and used the ranking score function R 2 to score the elements in the object domain. The results are as follows:
R 2 ( C ) = 0.8013 c 1 + 0.8076 c 2 + 0.8015 c 3 + 0.8200 c 4 .
Therefore, the final result was c 4 > c 2 > c 3 > c 1 .

6.1.6. Comparative Analysis

Figure 5 and Table 7 show the ranking results of the proposed method and existing methods.
As shown in Figure 5 and Table 7, the worst choice identified by all existing methods was c 1 , while the best choice was either c 4 or c 3 , with c 4 being the predominant selection. The results obtained by Algorithms I and II were consistent with this pattern. Specifically, the results of Algorithm I aligned with those of the methods proposed by Ma(1), Wang, and Xue, whereas the results of Algorithm II matched those of Ma(2) and Xu. This comparison demonstrates the efficiency and rationality of the approach presented in this section.

6.2. Typical Case II

6.2.1. Problem Analysis

Given the domain C = {c_1, c_2, …, c_m} as the pattern domain and the domain A = {a_1, a_2, …, a_n} as the attribute value domain, a_n(c_m) = (μ_n(c_m), v_n(c_m)), where μ_n(c_m) indicates the degree to which elements in the pattern c_m need to have the attribute a_n, v_n(c_m) indicates the degree to which elements in the pattern c_m do not need to have the attribute a_n, and (μ_n(c_m))² + (v_n(c_m))² ≤ 1. The individual under consideration, c_s, is a PF set on the domain A, and its pattern affiliation needs to be determined based on its attribute values.

6.2.2. Decision Algorithm III

Step 1: First, we evaluate the attribute values of different elements in the pattern domain and obtain the PF relationship matrix R P d .
Step 2: Based on the obtained PF relationship, we calculate the PF rough approximation operators R P d P O c s , R P d R P O c s of individual c s in the PF approximation space ( C , A , R P d ) .
Step 3: We use the PFOFs to aggregate the PF rough approximation operators obtained:
RESULT = P O ( R P d P O c s , R P d R P O c s ) .
Step 4: We use the fuzzy score functions FS to score the elements in the RESULT and finally sort them according to the scores.
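The hedged Python sketch below (not the paper's code) walks through Decision Algorithm III on a small hypothetical pattern relation (Table 8 is not reproduced in the text); P_O is the PFOF of Example 1(2) in simplified form, R_PO is approximated from Definition 17 by a grid search, and FS is Example 4(3).

```python
# A hedged sketch (not from the paper): Decision Algorithm III on hypothetical data.

def P_O(a, b):                                    # Example 1(2), simplified form
    return (min(a[0], b[0]), max(a[1], b[1]))

def R_PO(a, b, n=100):                            # grid approximation of Definition 17
    feas = [(i / n, j / n) for i in range(n + 1) for j in range(n + 1)
            if (i / n) ** 2 + (j / n) ** 2 <= 1 and
            P_O(a, (i / n, j / n))[0] <= b[0] + 1e-9 and
            P_O(a, (i / n, j / n))[1] >= b[1] - 1e-9]
    return (max(c[0] for c in feas), min(c[1] for c in feas))

def fs3(mu, v):
    return (mu ** 2 - v ** 2 + 1) / 2

# Step 1: hypothetical PF relation between patterns (rows) and attributes (columns)
R = {"c1": [(0.8, 0.3), (0.5, 0.6)],
     "c2": [(0.4, 0.7), (0.9, 0.2)]}
c_s = [(0.7, 0.4), (0.6, 0.5)]                    # the individual's attribute values

result = {}
for c, row in R.items():
    lower_terms = [R_PO(r, s) for r, s in zip(row, c_s)]       # Step 2
    upper_terms = [P_O(r, s) for r, s in zip(row, c_s)]
    lower = (min(t[0] for t in lower_terms), max(t[1] for t in lower_terms))
    upper = (max(t[0] for t in upper_terms), min(t[1] for t in upper_terms))
    result[c] = fs3(*P_O(upper, lower))                        # Steps 3-4
print(sorted(result, key=result.get, reverse=True), result)
```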

6.2.3. Example Analysis III

Decision Algorithm III was applied to solve the typical problem raised in the literature [24]. Let C = { c 1 , c 2 , c 3 , c 4 , c 5 } represent the five departments of a company for recruitment, which are the administrative department, human resources department, R&D department, finance department, and sales department. Another domain A = { a 1 , a 2 , a 3 , a 4 , a 5 , a 6 } represents the six abilities which the company requires job seekers to have, which are mathematical ability, computer level, English speaking, interpersonal relationship handling, adaptability, and organizational management ability. Table 8 is the PF relationship between C and A, which is the scoring standard for the company to recruit new employees. Through evaluation, an employed person obtains the following six abilities: c s = ( 0.7 , 0.4 ) a 1 + ( 0.9 , 0.2 ) a 2 + ( 0.7 , 0.4 ) a 3 + ( 0.6 , 0.2 ) a 4 + ( 0.6 , 0.5 ) a 5 + ( 0.8 , 0.5 ) a 6 .
Taking the $P_O$ and $R_{P_O}$ from Example 5(2), we calculated the PF rough approximation operators of $c_s$ as follows:
$R_{P_d}^{P_O}(c_s) = \frac{(0.8944,\,0.2889)}{c_1} + \frac{(0.8367,\,0.3660)}{c_2} + \frac{(0.9487,\,0.1421)}{c_3} + \frac{(0.8367,\,0.2889)}{c_4} + \frac{(0.8367,\,0.2889)}{c_5}$;
$R_{P_d}^{R_{P_O}}(c_s) = \frac{(0.3600,\,0.6641)}{c_1} + \frac{(0.3600,\,0.6641)}{c_2} + \frac{(0.3600,\,0.6641)}{c_3} + \frac{(0.3600,\,0.6641)}{c_4} + \frac{(0.3600,\,0.6641)}{c_5}$.
Taking the $P_O$ from Example 5(1) and aggregating the PF rough approximation operators, we obtained
$\mathrm{RESULT} = \frac{(0.3320,\,0.6960)}{c_1} + \frac{(0.3012,\,0.7161)}{c_2} + \frac{(0.3415,\,0.6700)}{c_3} + \frac{(0.3012,\,0.6960)}{c_4} + \frac{(0.3012,\,0.6960)}{c_5}$.
We took the FS from Example 4(1) and scored the elements of RESULT:
$\mathrm{FS}(\mathrm{RESULT}) = \frac{0.2914}{c_1} + \frac{0.2718}{c_2} + \frac{0.3135}{c_3} + \frac{0.2819}{c_4} + \frac{0.2819}{c_5}$.
Therefore, the final ranking was $c_3 > c_1 > c_4 = c_5 > c_2$.

6.2.4. Comparative Analysis

By changing the PFOFs and fuzzy score functions in Algorithm III, Figure 6 and Table 9 display the ranking results under various parameter settings. Case 1 was the result obtained in Section 6.2.3; Case 2 changed the aggregation operator of Step 3 in Section 6.2.3 from the PFOF to the operator $b_1 \oplus b_2 = \left(\sqrt{\mu_P(b_1)^2 + \mu_P(b_2)^2 - \mu_P(b_1)^2\,\mu_P(b_2)^2},\; \sqrt{v_P(b_1)^2\,v_P(b_2)^2}\right)$ from the literature [23]; Case 3 changed the fuzzy score function of Step 4 to that of Example 4(2); Case 4 changed the fuzzy score function of Step 4 to that of Example 4(3); Case 5 changed the fuzzy score function of Step 4 to $S(b) = \mu_P(b)^2 - v_P(b)^2$ from the literature [23]; and Case 6 changed the PFOF of Step 2 to that of Example 5(1) and the PFOF of Step 3 to that of Example 5(2).
As shown in the images and tables, the results obtained from Cases 1–5 followed the order $c_3 > c_1 > c_4 = c_5 > c_2$, which is consistent with the results reported in the literature, indicating the effectiveness of the method. However, in the literature, the order of $c_4$ and $c_5$ could not be distinguished, which is a limitation. By adjusting the parameters in the algorithm, as demonstrated in Case 6, this limitation can be overcome, showing that PFRSs based on PFOFs offer greater flexibility and superiority over classical PFRSs.
The algorithm proposed in this paper has certain limitations. In particular, because it does not include a mechanism for handling unequal attribute weights, it cannot be applied directly to problems involving weighted attributes. This issue can, however, be addressed: when constructing the PF relation matrix, one can first apply the aggregation operator $\lambda b = \left(\sqrt{1 - (1 - \mu_P(b)^2)^{\lambda}},\; (v_P(b))^{\lambda}\right)$ to integrate each attribute weight with its corresponding PF information and then proceed with the proposed algorithm. A small illustrative sketch of this pre-weighting step is given below.
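The sketch below applies the scalar multiplication quoted above to one hypothetical row of a PF relation matrix before Algorithm III would be run; the score function $S(b) = \mu_P(b)^2 - v_P(b)^2$ from Case 5 is included only to show how the weighted values could then be compared. The weights and data are invented for the example.

```python
import math

def scalar_mult(lam: float, mu: float, v: float):
    """Weight integration: lambda*b = (sqrt(1 - (1 - mu^2)^lambda), v^lambda)."""
    return (math.sqrt(1.0 - (1.0 - mu**2) ** lam), v ** lam)

def score(mu: float, v: float) -> float:
    """Score function S(b) = mu^2 - v^2 (as quoted for Case 5)."""
    return mu**2 - v**2

# Hypothetical attribute weights and one row of a PF relation matrix.
weights = [0.3, 0.2, 0.5]
row = [(0.7, 0.4), (0.9, 0.2), (0.6, 0.5)]

# Fuse each weight with its PF value before building the relation matrix
# that Algorithm III would consume.
weighted_row = [scalar_mult(w, mu, v) for w, (mu, v) in zip(weights, row)]
for mu, v in weighted_row:
    print(round(mu, 4), round(v, 4), round(score(mu, v), 4))
```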

7. Conclusions

In summary, this paper introduces and explores several developments in the PF domain. First, we proposed the concept of PFOFs, analyzed their properties, and established a general construction approach. Next, a novel similarity measure for PF sets was presented, and its effectiveness and rationality were demonstrated through examples. Furthermore, a generalized PFRS model was formulated using PFOFs and fuzzy implications, extended to the dual-domain case, and equipped with a method for assessing the approximation accuracy of PFRSs. Finally, three MCDM algorithms based on PF information were developed, with examples illustrating their effectiveness and practicality. In the future, the PFRS models can be further extended to investigate covering PFRSs based on PFOFs [48].

Author Contributions

X.Z. proposed the idea and framework of this paper; Y.Y. analyzed the properties of PFOFs and rough sets and wrote the paper; J.W. reviewed and improved the paper; Y.Y. and J.W. are co-first authors; J.W. is the supervisor. Writing—original draft, Y.Y.; Writing—review & editing, J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant Nos. 12271319 and 12201373).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef]
  2. De, S.K.; Biswas, R.; Roy, A.R. An application of intuitionistic fuzzy sets in medical diagnosis. Fuzzy Sets Syst. 2001, 117, 209–213. [Google Scholar] [CrossRef]
  3. Shouyu, C.; Yu, G. Variable fuzzy sets and its application in comprehensive risk evaluation for flood-control engineering system. Fuzzy Optim. Decis. Mak. 2006, 5, 153–162. [Google Scholar] [CrossRef]
  4. Zhan, J.; Cai, M. A cost-minimized two-stage three-way dynamic consensus mechanism for social network-large scale group decision-making: Utilizing K-nearest neighbors for incomplete fuzzy preference relations. Expert Syst. Appl. 2024, 263, 125705. [Google Scholar] [CrossRef]
  5. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356. [Google Scholar] [CrossRef]
  6. Bello, R.; Falcon, R. Rough sets in machine learning: A review. In Thriving Rough Sets: 10th Anniversary-Honoring Professor Zdzisław Pawlak’s Life and Legacy & 35 Years of Rough Sets; Springer: Cham, Switzerland, 2017; pp. 87–118. [Google Scholar]
  7. Geng, Z.; Zhu, Q. Rough set-based heuristic hybrid recognizer and its application in fault diagnosis. Expert Syst. Appl. 2009, 36, 2711–2718. [Google Scholar] [CrossRef]
  8. Ziarko, W. Discovery through rough set theory. Commun. ACM 1999, 42, 54–57. [Google Scholar] [CrossRef]
  9. Dubois, D.; Prade, H. Rough fuzzy sets and fuzzy rough sets. Int. J. Gen. Syst. 1990, 17, 191–209. [Google Scholar] [CrossRef]
  10. Theerens, A.; Cornelis, C. Fuzzy rough sets based on fuzzy quantification. Fuzzy Sets Syst. 2023, 473, 108704. [Google Scholar] [CrossRef]
  11. Dong, L.; Wang, R.; Chen, D. Incremental feature selection with fuzzy rough sets for dynamic data sets. Fuzzy Sets Syst. 2023, 467, 108503. [Google Scholar] [CrossRef]
  12. Zhang, X.; Ou, Q.; Wang, J. Variable precision fuzzy rough sets based on overlap functions with application to tumor classification. Inf. Sci. 2024, 666, 120451. [Google Scholar] [CrossRef]
  13. Zhao, J.; Wu, D.; Wu, J.; Ye, W.; Huang, F.; Wang, J.; See-To, E.W. Consistency approximation: Incremental feature selection based on fuzzy rough set theory. Pattern Recognit. 2024, 155, 110652. [Google Scholar] [CrossRef]
  14. Bu, H.; Wang, J.; Shao, S.; Zhang, X. A novel model of fuzzy rough sets based on grouping functions and its application. Comput. Appl. Math. 2025, 44, 1–31. [Google Scholar] [CrossRef]
  15. Atanassov, K.T. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [Google Scholar] [CrossRef]
  16. Zhang, X.; Zhou, B.; Li, P. A general frame for intuitionistic fuzzy rough sets. Inf. Sci. 2012, 216, 34–49. [Google Scholar] [CrossRef]
  17. Huang, B.; Guo, C.x.; Zhuang, Y.l.; Li, H.x.; Zhou, X.z. Intuitionistic fuzzy multigranulation rough sets. Inf. Sci. 2014, 277, 299–320. [Google Scholar] [CrossRef]
  18. Xue, Z.; Zhao, L.; Sun, L.; Zhang, M.; Xue, T. Three-way decision models based on multigranulation support intuitionistic fuzzy rough sets. Int. J. Approx. Reason. 2020, 124, 147–172. [Google Scholar] [CrossRef]
  19. Wang, J.; Zhang, X. Two types of intuitionistic fuzzy covering rough sets and an application to multiple criteria group decision making. Symmetry 2018, 10, 462. [Google Scholar] [CrossRef]
  20. Wang, J.; Zhang, X. Intuitionistic Fuzzy Granular Matrix: Novel Calculation Approaches for Intuitionistic Fuzzy Covering-Based Rough Sets. Axioms 2024, 13, 411. [Google Scholar] [CrossRef]
  21. Wen, X.; Zhang, X.; Lei, T. Intuitionistic fuzzy (IF) overlap functions and IF-rough sets with applications. Symmetry 2021, 13, 1494. [Google Scholar] [CrossRef]
  22. Yager, R.R. Pythagorean fuzzy subsets. In Proceedings of the 2013 Joint IFSA World Congress and NAFIPS Annual Meeting (IFSA/NAFIPS), Edmonton, AB, Canada, 24–28 June 2013; pp. 57–61. [Google Scholar]
  23. Zhang, X.; Xu, Z. Extension of TOPSIS to multiple criteria decision making with Pythagorean fuzzy sets. Int. J. Intell. Syst. 2014, 29, 1061–1078. [Google Scholar] [CrossRef]
  24. Zhang, C.; Li, D. Pythagorean fuzzy rough sets and its applications in multi-attribute decision making. J. Chin. Comput. Syst. 2016, 37, 1531–1535. [Google Scholar]
  25. Zhang, C.; Li, D.; Ren, R. Pythagorean fuzzy multigranulation rough set over two universes and its applications in merger and acquisition. Int. J. Intell. Syst. 2016, 31, 921–943. [Google Scholar] [CrossRef]
  26. Zhan, J.; Sun, B.; Zhang, X. PF-TOPSIS method based on CPFRS models: An application to unconventional emergency events. Comput. Ind. Eng. 2020, 139, 106192. [Google Scholar] [CrossRef]
  27. Sun, B.; Tong, S.; Ma, W.; Wang, T.; Jiang, C. An approach to MCGDM based on multi-granulation Pythagorean fuzzy rough set over two universes and its application to medical decision problem. Artif. Intell. Rev. 2022, 55, 1887–1913. [Google Scholar] [CrossRef]
  28. Ye, J.; Sun, B.; Bao, Q.; Che, C.; Huang, Q.; Chu, X. A new multi-objective decision-making method with diversified weights and Pythagorean fuzzy rough sets. Comput. Ind. Eng. 2023, 182, 109406. [Google Scholar] [CrossRef]
  29. Zhao, J.; Wan, R.; Miao, D. Conflict Analysis Triggered by Three-Way Decision and Pythagorean Fuzzy Rough Set. Int. J. Comput. Intell. Syst. 2024, 17, 17. [Google Scholar] [CrossRef]
  30. Bustince, H.; Fernandez, J.; Mesiar, R.; Montero, J.; Orduna, R. Overlap functions. Nonlinear Anal. Theory Methods Appl. 2010, 72, 1488–1499. [Google Scholar] [CrossRef]
  31. Qiao, J.; Hu, B.Q. On interval additive generators of interval overlap functions and interval grouping functions. Fuzzy Sets Syst. 2017, 323, 19–55. [Google Scholar] [CrossRef]
  32. Bustince, H.; Fernández, J.; Mesiar, R.; Montero, J.; Orduna, R. Overlap index, overlap functions and migrativity. In Proceedings of the 2009 International Fuzzy Systems Association World Congress and 2009 European Society for Fuzzy Logic and Technology Conference, Lisbon, Portugal, 20–24 July 2009. [Google Scholar]
  33. Zeng, W.; Li, D.; Yin, Q. Distance and similarity measures of Pythagorean fuzzy sets and their applications to multiple criteria group decision making. Int. J. Intell. Syst. 2018, 33, 2236–2254. [Google Scholar] [CrossRef]
  34. Jia, Z.; Qiao, J.; Chen, M. On Similarity Measures Between Pythagorean Fuzzy Sets Derived from Overlap and Grouping Functions. Int. J. Fuzzy Syst. 2023, 25, 2380–2396. [Google Scholar] [CrossRef]
  35. Hung, W.L.; Yang, M.S. Similarity measures of intuitionistic fuzzy sets based on Hausdorff distance. Pattern Recognit. Lett. 2004, 25, 1603–1611. [Google Scholar] [CrossRef]
  36. Ye, J. Cosine similarity measures for intuitionistic fuzzy sets and their applications. Math. Comput. Model. 2011, 53, 91–97. [Google Scholar] [CrossRef]
  37. Chen, S.M. Similarity measures between vague sets and between elements. IEEE Trans. Syst. Man, Cybern. Part B (Cybern.) 1997, 27, 153–158. [Google Scholar] [CrossRef]
  38. Li, Y.; Olson, D.L.; Qin, Z. Similarity measures between intuitionistic fuzzy (vague) sets: A comparative analysis. Pattern Recognit. Lett. 2007, 28, 278–285. [Google Scholar] [CrossRef]
  39. Hong, D.H.; Kim, C. A note on similarity measures between vague sets and between elements. Inf. Sci. 1999, 115, 83–96. [Google Scholar] [CrossRef]
  40. Peng, X.; Yuan, H.; Yang, Y. Pythagorean fuzzy information measures and their applications. Int. J. Intell. Syst. 2017, 32, 991–1029. [Google Scholar] [CrossRef]
  41. Zhang, Q.; Hu, J.; Feng, J.; Liu, A.; Li, Y. New similarity measures of Pythagorean fuzzy sets and their applications. IEEE Access 2019, 7, 138192–138202. [Google Scholar] [CrossRef]
  42. Zhang, S.P.; Sun, P.; Mi, J.S.; Feng, T. Belief function of Pythagorean fuzzy rough approximation space and its applications. Int. J. Approx. Reason. 2020, 119, 58–80. [Google Scholar] [CrossRef]
  43. Ma, Z.; Xu, Z. Symmetric Pythagorean fuzzy weighted geometric/averaging operators and their application in multicriteria decision-making problems. Int. J. Intell. Syst. 2016, 31, 1198–1219. [Google Scholar] [CrossRef]
  44. Xu, T.T.; Zhang, H.; Li, B.Q. Pythagorean fuzzy entropy and its application in multiple-criteria decision-making. Int. J. Fuzzy Syst. 2020, 22, 1552–1564. [Google Scholar] [CrossRef]
  45. Wang, Z.; Xiao, F.; Cao, Z. Uncertainty measurements for Pythagorean fuzzy set and their applications in multiple-criteria decision making. Soft Comput. 2022, 26, 9937–9952. [Google Scholar] [CrossRef]
  46. Rani, P.; Mishra, A.R.; Pardasani, K.R.; Mardani, A.; Liao, H.; Streimikiene, D. A novel VIKOR approach based on entropy and divergence measures of Pythagorean fuzzy sets to evaluate renewable energy technologies in India. J. Clean. Prod. 2019, 238, 117936. [Google Scholar] [CrossRef]
  47. Xue, W.; Xu, Z.; Zhang, X.; Tian, X. Pythagorean fuzzy LINMAP method based on the entropy theory for railway project investment decision making. Int. J. Intell. Syst. 2018, 33, 93–125. [Google Scholar] [CrossRef]
  48. Yan, C.; Zhang, H. Attribute reduction methods based on Pythagorean fuzzy covering information systems. IEEE Access 2020, 8, 28484–28495. [Google Scholar] [CrossRef]
Figure 1. Relationships among related fuzzy rough sets.
Figure 2. Difference between rough sets and classical sets.
Figure 3. Images with different FS values.
Figure 4. FS values of elements in different sets.
Figure 5. Ranking results between different methods.
Figure 6. Ranking results in different situations.
Table 1. Existing similarity measures.
Author | Similarity Measure
Hung [35] | $S_{HY}(B_1, B_2) = 1 - \frac{1}{n}\sum_{i=1}^{n}\max\left(|\mu_P(B_1(b_i)) - \mu_P(B_2(b_i))|,\; |v_P(B_1(b_i)) - v_P(B_2(b_i))|\right)$
Ye [36] | $S_{Y}(B_1, B_2) = \frac{1}{n}\sum_{i=1}^{n}\frac{\mu_P(B_1(b_i))\,\mu_P(B_2(b_i)) + v_P(B_1(b_i))\,v_P(B_2(b_i))}{\sqrt{\mu_P(B_1(b_i))^2 + v_P(B_1(b_i))^2}\,\sqrt{\mu_P(B_2(b_i))^2 + v_P(B_2(b_i))^2}}$
Chen [37] | $S_{C}(B_1, B_2) = 1 - \frac{1}{2n}\sum_{i=1}^{n}\left|\mu_P(B_1(b_i)) - v_P(B_1(b_i)) - \left(\mu_P(B_2(b_i)) - v_P(B_2(b_i))\right)\right|$
Li [38] | $S_{L}(B_1, B_2) = 1 - \sqrt{\frac{\sum_{i=1}^{n}\left((\mu_P(B_1(b_i)) - \mu_P(B_2(b_i)))^2 + (v_P(B_1(b_i)) - v_P(B_2(b_i)))^2\right)}{2n}}$
Hong [39] | $S_{H}(B_1, B_2) = 1 - \frac{1}{2n}\sum_{i=1}^{n}\left(|\mu_P(B_1(b_i)) - \mu_P(B_2(b_i))| + |v_P(B_1(b_i)) - v_P(B_2(b_i))|\right)$
Peng [40] | $S_{P}(B_1, B_2) = 1 - \frac{1}{2n}\sum_{i=1}^{n}\left|\mu_P(B_1(b_i)) - \mu_P(B_2(b_i)) - \left(v_P(B_1(b_i)) - v_P(B_2(b_i))\right)\right|$
Zhang [41] | $S_{Z}(B_1, B_2) = \frac{1}{n}\sum_{i=1}^{n}\left(2^{\,1 - \max\left(|\mu_P(B_1(b_i))^2 - \mu_P(B_2(b_i))^2|,\, |v_P(B_1(b_i))^2 - v_P(B_2(b_i))^2|,\, |\pi_P(B_1(b_i))^2 - \pi_P(B_2(b_i))^2|\right)} - 1\right)$
Qiao [34] | $S_{Q}(B_1, B_2) = \sqrt[k]{\sum_{i=1}^{n}\omega_i\, O\!\left(N_a\!\left(|\mu_P(B_1(b_i)) - \mu_P(B_2(b_i))|^{k}\right),\; N_b\!\left(|v_P(B_1(b_i)) - v_P(B_2(b_i))|^{k}\right)\right)}$
Table 2. Calculation results of different similarity measures in numerical experiment 1.
Case | 1 | 2 | 3 | 4
$b_1$ | (0.5, 0.5) | (0.6, 0.4) | (0, 0.87) | (0.6, 0.27)
$b_2$ | (0, 0) | (0, 0) | (0.28, 0.55) | (0.28, 0.55)
$S_{HY}$ | 0.5000 | 0.4000 | 0.6800 | 0.6800
$S_{Y}$ | NaN | NaN | 0.8912 | 0.7794
$S_{C}$ | 1.0000 | 0.9000 | 0.7000 | 0.7000
$S_{L}$ | 0.5000 | 0.4901 | 0.6993 | 0.6993
$S_{H}$ | 0.5000 | 0.5000 | 0.7000 | 0.7000
$S_{P}$ | 1.0000 | 0.9000 | 0.7336 | 0.7444
$S_{Z}$ | 0.4142 | 0.3947 | 0.4596 | 0.6454
$S_{Q}$ | 0.5625 | 0.5376 | 0.8272 | 0.8272
Our method | 0.9495 | 0.8689 | 0.1883 | 0.5391
Table 3. Calculation results of different similarity measures in numerical experiment 2.
Measure | $S(B_1, B_4)$ | $S(B_2, B_4)$ | $S(B_3, B_4)$ | Maximum
$S_{HY}$ | 0.8750 | 0.7500 | 0.9000 | $S(B_3, B_4)$
$S_{Y}$ | 1.0000 | 1.0000 | 0.9969 | Unable to judge
$S_{C}$ | 1.0000 | 1.0000 | 0.9750 | Unable to judge
$S_{L}$ | 0.8677 | 0.7261 | 0.9134 | $S(B_3, B_4)$
$S_{H}$ | 0.8750 | 0.7500 | 0.9250 | $S(B_3, B_4)$
$S_{P}$ | 1.0000 | 1.0000 | 0.9775 | Unable to judge
$S_{Z}$ | 0.7722 | 0.6211 | 0.8726 | $S(B_3, B_4)$
$S_{Q}$ | 0.9825 | 0.9250 | 0.9925 | $S(B_3, B_4)$
Our method | 0.7596 | 0.5619 | 0.7851 | $S(B_3, B_4)$
Table 4. PF relation.
 | $b_1$ | $b_2$ | $b_3$ | $b_4$
$b_1$ | (1, 0) | (0.8, 0.4) | (0.7, 0.2) | (0.5, 0.6)
$b_2$ | (0.8, 0.4) | (1, 0) | (0.5, 0.5) | (0.8, 0)
$b_3$ | (0.7, 0.2) | (0.5, 0.5) | (1, 0) | (0.2, 0.9)
$b_4$ | (0.5, 0.6) | (0.8, 0) | (0.2, 0.9) | (1, 0)
Table 5. Attribute evaluation value matrix.
 | $a_1$ | $a_2$ | $a_3$ | $a_4$
$c_1$ | (0.9, 0.3) | (0.7, 0.6) | (0.5, 0.8) | (0.6, 0.3)
$c_2$ | (0.4, 0.7) | (0.9, 0.2) | (0.8, 0.1) | (0.5, 0.3)
$c_3$ | (0.8, 0.4) | (0.7, 0.5) | (0.6, 0.2) | (0.7, 0.4)
$c_4$ | (0.7, 0.2) | (0.8, 0.2) | (0.8, 0.4) | (0.6, 0.6)
Table 6. Similarity calculation results.
 | $c_1$ | $c_2$ | $c_3$ | $c_4$
$c_b$ | 0.7202 | 0.8089 | 0.8095 | 0.8298
$c_w$ | 0.7729 | 0.6893 | 0.6369 | 0.6227
Table 7. Ranking results between different methods.
Method | Ranking Result
Ma(1) [43] | $c_4 > c_3 > c_2 > c_1$
Ma(2) [43] | $c_4 > c_2 > c_3 > c_1$
Xu [44] | $c_4 > c_2 > c_3 > c_1$
Wang [45] | $c_4 > c_3 > c_2 > c_1$
Peng [40] | $c_3 > c_4 > c_2 > c_1$
Rani [46] | $c_3 > c_4 > c_2 > c_1$
Xue [47] | $c_4 > c_3 > c_2 > c_1$
Algorithm I | $c_4 > c_3 > c_2 > c_1$
Algorithm II | $c_4 > c_2 > c_3 > c_1$
Table 8. Scoring criteria for recruitment of new employees.
 | $a_1$ | $a_2$ | $a_3$ | $a_4$ | $a_5$ | $a_6$
$c_1$ | (0.3, 0.9) | (0.2, 0.8) | (0.7, 0.3) | (0.7, 0.4) | (0.8, 0.3) | (0.8, 0.2)
$c_2$ | (0.1, 0.9) | (0.4, 0.7) | (0.6, 0.5) | (0.3, 0.8) | (0.8, 0.4) | (0.7, 0.5)
$c_3$ | (0.8, 0.2) | (1, 0) | (0.7, 0.3) | (0.4, 0.7) | (0.5, 0.6) | (0.6, 0.5)
$c_4$ | (0.7, 0.3) | (0.5, 0.4) | (0.8, 0.4) | (0.6, 0.5) | (0.7, 0.4) | (0.6, 0.6)
$c_5$ | (0.2, 0.6) | (0.1, 0.7) | (0.7, 0.2) | (0.6, 0.5) | (0.9, 0.1) | (0.7, 0.6)
Table 9. Ranking results in different situations.
Case | Ranking Result
Zhang [24] | $c_3 > c_1 > c_4 = c_5 > c_2$
Case 1 | $c_3 > c_1 > c_4 = c_5 > c_2$
Case 2 | $c_3 > c_1 > c_4 = c_5 > c_2$
Case 3 | $c_3 > c_1 > c_4 = c_5 > c_2$
Case 4 | $c_3 > c_1 > c_4 = c_5 > c_2$
Case 5 | $c_3 > c_1 > c_4 = c_5 > c_2$
Case 6 | $c_3 > c_1 > c_4 > c_5 > c_2$
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
