Article

Three-Way Approximations with Covering-Based Rough Set

College of Mathematics and Information Science, North Minzu University, Yinchuan 750021, China
*
Author to whom correspondence should be addressed.
Axioms 2025, 14(10), 721; https://doi.org/10.3390/axioms14100721
Submission received: 31 August 2025 / Revised: 18 September 2025 / Accepted: 19 September 2025 / Published: 24 September 2025

Abstract

In order to approximate an undefinable set of objects by using the extensions in OE-concept lattices, this study combines three-way concept analysis with covering-based rough sets and introduces an innovative approach for managing uncertain information and decision-making. This approach employs the minimal neighborhood of the maximal description, which is determined by meet-irreducible elements, to define the lower and upper approximations of an undefinable set. On this basis, we formalize the concepts of lower and upper approximation OE-concepts and propose a three-way approximation optimization algorithm. Experimental results demonstrate the effectiveness and efficiency of our algorithm.

1. Introduction

Rough set theory, initially introduced by the Polish scholar Pawlak [1] in 1982, is a powerful tool for data analysis and knowledge discovery. Since its inception, rough set theory has undergone substantial advancements in both theoretical extensions and practical applications. For instance, Stepaniuk and Skowron [2] explored a three-way rough set-based approach for decision granule approximation and developed a novel approximation space to optimize the characterization of compound decision granules. Janusz and Ślęzak [3] proposed rough set methods for attribute clustering and selection in which greedy heuristics are incorporated to reduce computation times and generate multiple decision reducts. Inspired by the concepts of lower and upper approximations in rough set theory, Siminski and Wakulicz-Deja [4] proposed an extended forward inference algorithm. This method enhances classical forward inference by enabling the process to continue after an inference failure through the interactive supplementation of missing facts. Rough set theory is established upon the indiscernibility relations that exist within the universe of discourse. However, the indiscernibility relations may sometimes restrict the practical applications of rough sets [5]. To overcome this limitation, Zakowski [6] proposed covering-based rough sets by utilizing covering relations instead of indiscernibility relations. To address issues of the minimal and maximal descriptions of covering-based rough sets, Wang et al. [7] devised matrix approaches that facilitate these descriptions and introduced two types of covering information reductions under these descriptions. Zhu [8] further enriched the field by identifying four distinct types of covering-based rough sets and investigating their topological properties, which has been instrumental in understanding the connections among different covering-based rough sets. Moreover, Zhang et al. [9] established the equivalences between the four types of covering-based rough sets and traditional rough sets, and also presented necessary and sufficient conditions for these equivalences. Yao et al. [10] presented a framework for approximating covering-based rough sets by constructing four distinct neighborhoods, six novel coverings, and two sub-systems. They also proposed approximation operators based on elements, granules, and subsystems relevant to covering-based rough sets and discussed the relationships between these operators and existing ones. Janicki et al. [11] defined optimal approximation in the context of similarity measures and provided an algorithm that employs the Jaccard Index as an optimal approximation operator.
Formal Concept Analysis (FCA), introduced by Wille in 1982, has evolved into a valuable method utilized in data mining, information retrieval, knowledge discovery, and various other fields [12,13,14,15,16,17]. Some scholars have expanded FCA by introducing multiple types of formal concepts, which significantly enriches the original FCA. In 2014, Qi et al. [18] established three-way operators and their inverses and introduced two distinct types of three-way concepts: the object-induced concept and the attribute-induced concept. Furthermore, they integrated these concepts within the framework of three-way decision to construct three-way concept lattices. Zhang et al. [19] proposed a novel algorithm, termed 3WOC, which can directly and quickly generate all OEO concepts with lower time complexity. The introduction of the three-way concept lattice has since prompted widespread research interests across multiple scholarly fields including lattice construction [20,21,22,23,24], lattice reduction [25,26,27], rule acquisition derived from lattice analysis [28,29,30], and conceptual knowledge [31,32,33].
Some researchers have incorporated rough sets into formal concept analysis to investigate the uncertainty present in FCA [34,35,36]. Yao et al. [37] utilized approximation operators to represent and elucidate classical concept lattices and studied three distinct types of concept lattices: object-oriented, attribute-oriented, and complement-oriented concept lattices. Furthermore, Yao [34] developed a set of operators drawing from both lattice theory and set theory and investigated the interconnections between these operators. Building on these foundations, Mohanty [38] generalized approximation operators on formal concepts and represented undefinable sets by employing two definable sets in a formal context. Mao et al. [39] proposed variable precision rough set approximations within concept lattices informed by FCA, thereby facilitating the approximation of undefinable sets and analyzing trends in the changes of the β-upper and β-lower approximations. To date, researchers have typically constructed a complete concept lattice before employing approximation operators to approximately characterize conceptual knowledge. However, this type of approach overlooks the fact that constructing a concept lattice is an NP-hard problem. Existing approximation methods require the computation of all concepts, which results in significant computational time, especially when dealing with massive datasets. If approximate characterization could be achieved without building the entire concept lattice, the computational cost of mining conceptual knowledge would be significantly reduced. Therefore, lowering the time complexity of the computational process is a meaningful research task. This study aims to optimize the computational process of approximation methods by improving knowledge representation, thereby decreasing computational time. This improvement lays the foundation for a more efficient and concise knowledge representation in subsequent three-way concept lattices.
Building on the foundational work of reference [39], this paper introduces an innovative approach for developing approximation operators within the framework of three-way concept lattices. The primary contribution of this work is to extend the framework of the object-induced three-way concept. The organization of the rest of this paper is outlined below. Section 2 provides an overview of the foundational concepts of formal concept analysis. Section 3 and Section 4 elaborate on the extended framework of the OE concept: Section 3 defines the concept of maximal description and introduces the lower and upper approximation OE concepts; Section 4 proposes an optimization algorithm for approximation operators and analyzes its time complexity. Section 5 provides experimental validation of the proposed algorithm. Finally, conclusions are drawn in Section 6.

2. Preliminaries

This section will provide an overview of essential definitions and properties involved in this paper.
Definition 1 
([40]). Let L be a lattice. For k₁, k₂ ∈ L, if k₃ ∈ L, k₃ = k₁ ∧ k₂, and k₃ ≠ 0 implies that k₃ = k₁ or k₃ = k₂, then k₃ is meet-irreducible.
Definition 2 
([41]). A formal context K = (P, Q, R) is composed of two sets, P and Q, along with a relation R ⊆ P × Q that links elements of P to those of Q. The elements of P are referred to as the objects, while the elements of Q are known as the attributes.
To indicate that an object p ∈ P is related to an attribute q ∈ Q, we use the notation pRq or (p, q) ∈ R.
For an object p ∈ P, we denote the object intent {q ∈ Q | pRq} simply as p* instead of {p}*. Accordingly, we define q* := {p ∈ P | pRq} to represent the attribute extent of the attribute q.
Let Z ⊆ P and B ⊆ Q. The following defines the pair of operators used in formal concept analysis:
Z* = {q | q ∈ Q, ∀p ∈ Z, pRq},  B* = {p | p ∈ P, ∀q ∈ B, pRq}.
A formal concept is defined as a pair (Z, B), where Z ⊆ P and B ⊆ Q, satisfying Z* = B and B* = Z. The notation L(K) represents the set of all concepts associated with the context K.
We denote the object concept as OC(K) = (p**, p*) and the attribute concept as AC(K) = (q*, q**).
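As a concrete illustration of the derivation operators above, here is a minimal Python sketch. The toy context below (objects p1–p3, attributes q1–q3) is invented for illustration and is not Table 1 of the paper:

```python
# Toy formal context K = (P, Q, R); the objects, attributes, and
# relation are invented for illustration only.
P = {"p1", "p2", "p3"}
Q = {"q1", "q2", "q3"}
R = {("p1", "q1"), ("p1", "q2"), ("p2", "q2"), ("p3", "q3")}

def obj_star(Z):
    """Z* = {q in Q | pRq for every p in Z}: attributes shared by all of Z."""
    return {q for q in Q if all((p, q) in R for p in Z)}

def attr_star(B):
    """B* = {p in P | pRq for every q in B}: objects possessing all of B."""
    return {p for p in P if all((p, q) in R for q in B)}

# (Z, B) is a formal concept when Z* = B and B* = Z.
Z = {"p1", "p2"}
B = obj_star(Z)            # common attributes of p1 and p2
print(B, attr_star(B))     # ({p1, p2}, {q2}) is a formal concept here
```

Applying the two operators in sequence closes a set of objects, which is the basic mechanism behind the approximations developed later in the paper.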
Definition 3 
([18]). Let K = (P, Q, R) be a formal context with Z ⊆ P and B ⊆ Q. The pair of negative operators is defined as follows:
Z*̄ = {q | q ∈ Q, ∀p ∈ Z, p Rᶜ q},  B*̄ = {p | p ∈ P, ∀q ∈ B, p Rᶜ q}.
Here, Rᶜ = (P × Q) − R.
Definition 4 
([18]). Let K = (P, Q, R) be a formal context and Z ⊆ P, B ⊆ Q. The three-way operators are defined as follows:
Z◁ = (Z*, Z*̄),  B◁ = (B*, B*̄).
Dually, for Z₁, Z₂ ⊆ P and B₁, B₂ ⊆ Q, we define their inverses by
(Z₁, Z₂)▷ = {q ∈ Q | q ∈ Z₁* ∧ q ∈ Z₂*̄} = Z₁* ∩ Z₂*̄,  (B₁, B₂)▷ = {z ∈ P | z ∈ B₁* ∧ z ∈ B₂*̄} = B₁* ∩ B₂*̄.
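To make the three-way operators concrete, the following Python sketch evaluates them on an invented toy context (the object and attribute names are assumptions, and the operators are written as plain functions rather than superscripts):

```python
# Invented toy context for illustration only.
P = {"p1", "p2", "p3"}
Q = {"q1", "q2", "q3"}
R = {("p1", "q1"), ("p1", "q2"), ("p2", "q2"), ("p3", "q3")}

def obj_star(Z):
    """Z*: attributes shared by every object in Z."""
    return {q for q in Q if all((p, q) in R for p in Z)}

def obj_star_neg(Z):
    """Z*-bar: attributes shared by NO object in Z (complement relation)."""
    return {q for q in Q if all((p, q) not in R for p in Z)}

def attr_star(B):
    """B*: objects possessing every attribute in B."""
    return {p for p in P if all((p, q) in R for q in B)}

def attr_star_neg(C):
    """C*-bar: objects possessing no attribute in C."""
    return {p for p in P if all((p, q) not in R for q in C)}

def three_way(Z):
    """Object-induced three-way operator: Z -> (Z*, Z*-bar)."""
    return (obj_star(Z), obj_star_neg(Z))

def three_way_inv(B, C):
    """Inverse on attribute pairs: (B, C) -> B* ∩ C*-bar."""
    return attr_star(B) & attr_star_neg(C)

B, C = three_way({"p3"})
print(B, C, three_way_inv(B, C))  # ({p3}, ({q3}, {q1, q2})) is an OE concept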
Proposition 1 
([18]). Let K = (P, Q, R) be a formal context. For Z, Z₁, Z₂ ⊆ P and B, B₁, B₂, C, C₁, C₂ ⊆ Q, the following properties hold (set operations on attribute pairs are taken component-wise):
(1)
Z ⊆ Z◁▷,
(2)
(B, C) ⊆ (B, C)▷◁,
(3)
Z₁ ⊆ Z₂ ⟹ Z₂◁ ⊆ Z₁◁,
(4)
(B₁, C₁) ⊆ (B₂, C₂) ⟹ (B₂, C₂)▷ ⊆ (B₁, C₁)▷,
(5)
Z◁ = Z◁▷◁,
(6)
(B, C)▷ = (B, C)▷◁▷,
(7)
Z ⊆ (B, C)▷ ⟺ (B, C) ⊆ Z◁,
(8)
(Z₁ ∪ Z₂)◁ = Z₁◁ ∩ Z₂◁,
(9)
((B₁, C₁) ∪ (B₂, C₂))▷ = (B₁, C₁)▷ ∩ (B₂, C₂)▷,
(10)
(Z₁ ∩ Z₂)◁ ⊇ Z₁◁ ∪ Z₂◁,
(11)
((B₁, C₁) ∩ (B₂, C₂))▷ ⊇ (B₁, C₁)▷ ∪ (B₂, C₂)▷.
Definition 5 
([18]). Let K = (P, Q, R) be a formal context. The pair (Z, (B, C)) is referred to as an object-induced three-way concept (abbreviated as an OE concept) if and only if Z◁ = (B, C) and (B, C)▷ = Z. The set Z is termed the extension of the concept (Z, (B, C)) (the set of all such extensions is shortened to OEL_P), and the set of all OE concepts is known as the object-induced three-way concept lattice of K = (P, Q, R), shortened to OEL(K).
We denote the set of meet-irreducible elements in OEL(K) by M(OEL) and write M_P(OEL) for the set of extensions of the meet-irreducible elements in M(OEL).
We denote the attribute OE concepts as (q*, (q*)◁) and (q*̄, (q*̄)◁). If (Z, (B, C)) ∈ M(OEL), then there exists q ∈ Q such that (Z, (B, C)) = (q*, (q*)◁) or (q*̄, (q*̄)◁). That is, every meet-irreducible element in OEL(K) must be an attribute OE concept.
Definition 6 
([39]). Let L(P, Q, R) be a concept lattice. For Y ⊆ P and 0 ≤ β < 0.5, (apr̲_β(Y), (apr̲_β(Y))*) is referred to as the lower approximation concept, while (apr̄_β(Y), (apr̄_β(Y))*) is designated as the upper approximation concept, where the lower approximation of Y is defined as
apr̲_β(Y) = ⋃{Z | h(Y, Z) ≤ β, B ⊆ Q, (Z, B) ∈ L(P, Q, R)},
and the upper approximation of Y is defined as
apr̄_β(Y) = ⋃{Z | h(Y, Z) < 1 − β, B ⊆ Q, (Z, B) ∈ L(P, Q, R)}.
Here, h(Y, Z) = 1 − |Y ∩ Z| / |Y|.

3. Three-Way Approximations

Definition 7. 
Let K = (P, Q, R) be a formal context. For any y ∈ P, the definition of the maximal description of y is given by
MD(y) = {Z ⊆ P | y ∈ Z ∧ (Z, (B, C)) ∈ M(OEL) ∧ (∀(Z₁, (B₁, C₁)) ∈ M(OEL), (y ∈ Z₁ ∧ Z₁ ⊆ P ∧ (Z, (B, C)) ≤ (Z₁, (B₁, C₁))) ⟹ Z = Z₁)}.  (1)
⋂{Z | Z ∈ MD(y)} is referred to as the minimal neighborhood of the maximal description of y, denoted as ∩MD(y).
To clarify Definition 7, we illustrate it through Example 1.
Example 1. 
Table 1 presents a formal context K and Figure 1 depicts the details of its O E L ( K ) .
By Definition 1 and Definition 5, we can identify the following meet-irreducible elements in OEL(K):
(y₁y₂, (ab, e)), (y₃y₄y₅y₆, (∅, a)), (y₁y₂y₃, (b, e)), (y₄y₅y₆, (∅, ab)), (y₂y₃y₄y₅, (c, ∅)), (y₁y₆, (∅, ce)), (y₂y₃y₆, (d, e)), (y₁y₄y₅, (∅, d)), (y₅, (ce, abd)), (y₁y₂y₃y₄y₆, (∅, e)).
For y₁ ∈ P, we then have
MD(y₁) = {{y₁y₄y₅}, {y₁y₂y₃y₄y₆}},  ∩MD(y₁) = {y₁y₄y₅} ∩ {y₁y₂y₃y₄y₆} = {y₁y₄}.
The extension of a concept in FCA can be considered analogous to a definable set in rough set theory, although they are not precisely equivalent [38]. When a given set of objects serves as the extension of an OE concept, it is deemed definable. Conversely, a set that does not correspond to any OE concept's extension is classified as undefinable. In order to describe an undefinable set of objects Y ⊆ P by using the extensions in the OE concept lattice, we introduce formal definitions of the lower and upper approximation OE concepts, which are elaborated upon below:
Definition 8. 
Let K = (P, Q, R) be a formal context. For a set of objects Y ⊆ P and 0.5 < α ≤ 1, the lower approximation of Y is defined by
apr̲_α(Y) = {y | c(Y | ∩MD(y)) ≥ α, y ∈ Z, Z ⊆ P, B, C ⊆ Q, (Z, (B, C)) ∈ M(OEL)},
and the upper approximation of Y is defined by
apr̄_α(Y) = {y | c(Y | ∩MD(y)) > 1 − α, y ∈ Z, Z ⊆ P, B, C ⊆ Q, (Z, (B, C)) ∈ M(OEL)},
where c(Y | ∩MD(y)) = |Y ∩ (∩MD(y))| / |∩MD(y)|.
We call (apr̲_α(Y), ((apr̲_α(Y))*, (apr̲_α(Y))*̄)) and (apr̄_α(Y), ((apr̄_α(Y))*, (apr̄_α(Y))*̄)) the lower approximation OE concept and upper approximation OE concept, respectively.
Example 2 
(building upon Example 1). Let Y = {y₂y₄y₅} ⊆ P and α = 0.8.
The maximal descriptions are as follows:
MD(y₁) = {{y₁y₄y₅}, {y₁y₂y₃y₄y₆}},
MD(y₂) = {{y₂y₃y₄y₅}, {y₁y₂y₃y₄y₆}},
MD(y₃) = {{y₃y₄y₅y₆}, {y₂y₃y₄y₅}, {y₁y₂y₃y₄y₆}},
MD(y₄) = {{y₃y₄y₅y₆}, {y₂y₃y₄y₅}, {y₁y₄y₅}, {y₁y₂y₃y₄y₆}},
MD(y₅) = {{y₃y₄y₅y₆}, {y₂y₃y₄y₅}, {y₁y₄y₅}},
MD(y₆) = {{y₃y₄y₅y₆}, {y₁y₂y₃y₄y₆}}.
The minimal neighborhoods of the maximal descriptions are given as follows:
∩MD(y₁) = {y₁y₄}, ∩MD(y₂) = {y₂y₃y₄}, ∩MD(y₃) = {y₃y₄}, ∩MD(y₄) = {y₄}, ∩MD(y₅) = {y₄y₅}, ∩MD(y₆) = {y₃y₄y₆},
and
c(Y | ∩MD(y₁)) = 1/2, c(Y | ∩MD(y₂)) = 2/3, c(Y | ∩MD(y₃)) = 1/2, c(Y | ∩MD(y₄)) = 1, c(Y | ∩MD(y₅)) = 1, c(Y | ∩MD(y₆)) = 1/3.
Hence,
apr̲_0.8(Y) = ({y₄} ∪ {y₅})◁▷ = {y₄y₅},  (apr̲_0.8(Y), ((apr̲_0.8(Y))*, (apr̲_0.8(Y))*̄)) = (y₄y₅, (c, abd)),
and
apr̄_0.8(Y) = ({y₁} ∪ {y₂} ∪ {y₃} ∪ {y₄} ∪ {y₅} ∪ {y₆})◁▷ = {y₁y₂y₃y₄y₅y₆},  (apr̄_0.8(Y), ((apr̄_0.8(Y))*, (apr̄_0.8(Y))*̄)) = (P, (∅, ∅)).
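The computations of Example 2 can be reproduced in Python directly from the meet-irreducible extensions listed in Example 1. This is a runnable sketch rather than the paper's implementation: objects are coded as the integers 1–6, and the final closure step assumes that every extent of OEL(K) is an intersection of meet-irreducible extents (with the empty intersection taken to be P):

```python
from functools import reduce

# Extensions of the meet-irreducible OE concepts from Example 1 (y_i -> i).
M_P = [
    {1, 2}, {3, 4, 5, 6}, {1, 2, 3}, {4, 5, 6}, {2, 3, 4, 5},
    {1, 6}, {2, 3, 6}, {1, 4, 5}, {5}, {1, 2, 3, 4, 6},
]
U = {1, 2, 3, 4, 5, 6}  # the object set P

def max_description(y):
    """MD(y): inclusion-maximal meet-irreducible extents containing y."""
    cand = [Z for Z in M_P if y in Z]
    return [Z for Z in cand if not any(Z < Z1 for Z1 in cand)]

def min_neighborhood(y):
    """Minimal neighborhood: intersection of all extents in MD(y)."""
    return reduce(set.intersection, max_description(y))

def c(Y, y):
    """c(Y | minimal neighborhood of y) from Definition 8."""
    N = min_neighborhood(y)
    return len(Y & N) / len(N)

def closure(S):
    """Smallest extent containing S (assumption: intersection of the
    meet-irreducible extents above S; empty intersection gives U)."""
    return reduce(set.intersection, (Z for Z in M_P if S <= Z), set(U))

def lower(Y, alpha):
    return closure({y for y in U if c(Y, y) >= alpha})

def upper(Y, alpha):
    return closure({y for y in U if c(Y, y) > 1 - alpha})

Y = {2, 4, 5}
print(min_neighborhood(1))  # {1, 4}, as in Example 2
print(lower(Y, 0.8))        # {4, 5}
print(upper(Y, 0.8))        # {1, 2, 3, 4, 5, 6}
```

With Y = {1, 2, 3, 4} the same code reproduces Example 3 below, where the lower approximation {1, 2, 3, 4, 6} is not contained in Y.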
Example 2 illustrates a specific scenario in which the statement "if 0.5 < α ≤ 1, then apr̲_α(Y) ⊆ Y ⊆ apr̄_α(Y)" holds. However, the general applicability of this statement may be questionable. In order to clarify this, Example 3 shows that the aforementioned statement is not universally valid.
Example 3 
(building upon Example 2). Let Y = {y₁y₂y₃y₄}.
The lower and upper approximation OE concepts are presented below:
apr̲_0.8(Y) = ({y₁} ∪ {y₂} ∪ {y₃} ∪ {y₄})◁▷ = {y₁y₂y₃y₄y₆},  (apr̲_0.8(Y), ((apr̲_0.8(Y))*, (apr̲_0.8(Y))*̄)) = (y₁y₂y₃y₄y₆, (∅, e)),
and
apr̄_0.8(Y) = ({y₁} ∪ {y₂} ∪ {y₃} ∪ {y₄} ∪ {y₅} ∪ {y₆})◁▷ = {y₁y₂y₃y₄y₅y₆},  (apr̄_0.8(Y), ((apr̄_0.8(Y))*, (apr̄_0.8(Y))*̄)) = (P, (∅, ∅)).
Thus,
apr̲_0.8(Y) ⊈ Y, so the inclusion apr̲_0.8(Y) ⊆ Y ⊆ apr̄_0.8(Y) does not hold.
Theorem 1. 
For 0.5 < α ≤ 1 and Y, Y₁, Y₂ ⊆ P, the following properties are valid:
(1)
apr̲_α(∅) = ∅,
(2)
apr̄_α(∅) = ∅,
(3)
apr̲_α(P) = P,
(4)
apr̄_α(P) = P,
(5)
apr̲_α(Y) ⊆ apr̄_α(Y),
(6)
If Y₁ ⊆ Y₂, then apr̲_α(Y₁) ⊆ apr̲_α(Y₂),
(7)
If Y₁ ⊆ Y₂, then apr̄_α(Y₁) ⊆ apr̄_α(Y₂),
(8)
apr̲_α(Y₁ ∩ Y₂) ⊆ apr̲_α(Y₁) ∩ apr̲_α(Y₂),
(9)
apr̄_α(Y₁ ∩ Y₂) ⊆ apr̄_α(Y₁) ∩ apr̄_α(Y₂),
(10)
apr̲_α(Y₁) ∪ apr̲_α(Y₂) ⊆ apr̲_α(Y₁ ∪ Y₂),
(11)
apr̄_α(Y₁) ∪ apr̄_α(Y₂) ⊆ apr̄_α(Y₁ ∪ Y₂).
Proof. 
(1) Since 0.5 < α ≤ 1 and c(∅ | ∩MD(y)) = 0, then
apr̲_α(∅) = {y | c(∅ | ∩MD(y)) ≥ α, y ∈ Z, Z ⊆ P, B, C ⊆ Q, (Z, (B, C)) ∈ M(OEL)} = {y | 0 ≥ α, y ∈ Z, Z ⊆ P, B, C ⊆ Q, (Z, (B, C)) ∈ M(OEL)} = ∅.
Hence, apr̲_α(∅) = ∅ is proven.
(2) Similarly to (1), apr̄_α(∅) = ∅ can be proven.
(3) Since 0.5 < α ≤ 1 and c(P | ∩MD(y)) = 1, then
apr̲_α(P) = {y | c(P | ∩MD(y)) ≥ α, y ∈ Z, Z ⊆ P, B, C ⊆ Q, (Z, (B, C)) ∈ M(OEL)} = {y | 1 ≥ α, y ∈ Z, Z ⊆ P, B, C ⊆ Q, (Z, (B, C)) ∈ M(OEL)} = P.
Hence, apr̲_α(P) = P is proven.
(4) Similarly to (3), we can prove that apr̄_α(P) = P.
(5) Since α > 0.5 implies α > 1 − α, the condition c(Y | ∩MD(y)) ≥ α entails c(Y | ∩MD(y)) > 1 − α. By Definition 8, apr̲_α(Y) ⊆ apr̄_α(Y) holds.
(6) Since Y₁ ⊆ Y₂, then c(Y₁ | ∩MD(y)) ≤ c(Y₂ | ∩MD(y)). By Definition 8, apr̲_α(Y₁) ⊆ apr̲_α(Y₂) is proven.
(7) Similarly to (6), apr̄_α(Y₁) ⊆ apr̄_α(Y₂) can be proven.
(8) Since (Y₁ ∩ Y₂) ⊆ Y₁ and (Y₁ ∩ Y₂) ⊆ Y₂, then
c((Y₁ ∩ Y₂) | ∩MD(y)) ≤ c(Y₁ | ∩MD(y)) and c((Y₁ ∩ Y₂) | ∩MD(y)) ≤ c(Y₂ | ∩MD(y)).
We then obtain apr̲_α(Y₁ ∩ Y₂) ⊆ apr̲_α(Y₁) and apr̲_α(Y₁ ∩ Y₂) ⊆ apr̲_α(Y₂). Therefore, apr̲_α(Y₁ ∩ Y₂) ⊆ apr̲_α(Y₁) ∩ apr̲_α(Y₂) can be proven.
(9) Similarly to (8), apr̄_α(Y₁ ∩ Y₂) ⊆ apr̄_α(Y₁) ∩ apr̄_α(Y₂) can be proven.
(10) According to (6), since Y₁ ⊆ (Y₁ ∪ Y₂) and Y₂ ⊆ (Y₁ ∪ Y₂), then apr̲_α(Y₁) ⊆ apr̲_α(Y₁ ∪ Y₂) and apr̲_α(Y₂) ⊆ apr̲_α(Y₁ ∪ Y₂). Therefore, apr̲_α(Y₁) ∪ apr̲_α(Y₂) ⊆ apr̲_α(Y₁ ∪ Y₂) can be proven.
(11) The same as (10), apr̄_α(Y₁) ∪ apr̄_α(Y₂) ⊆ apr̄_α(Y₁ ∪ Y₂) can be proven.    □
Subsequently, we discuss further properties of varying thresholds for the lower and upper approximations.
Theorem 2. 
Let K = (P, Q, R) be a formal context. For Y ⊆ P and 0.5 < α₁ < α₂ ≤ 1, the following properties are true:
(1)
apr̲_α₂(Y) ⊆ apr̲_α₁(Y),
(2)
apr̄_α₁(Y) ⊆ apr̄_α₂(Y).
Proof. 
(1) According to Definition 8, since α₁ < α₂, the condition c(Y | ∩MD(y)) ≥ α₂ implies c(Y | ∩MD(y)) ≥ α₁. Therefore, apr̲_α₂(Y) ⊆ apr̲_α₁(Y) is proven.
(2) Since 0.5 < α₁ < α₂ ≤ 1, we then obtain 1 − α₂ < 1 − α₁; hence, c(Y | ∩MD(y)) > 1 − α₁ implies c(Y | ∩MD(y)) > 1 − α₂. Therefore, apr̄_α₁(Y) ⊆ apr̄_α₂(Y) is proven.    □
Theorem 3. 
Let K = (P, Q, R) be a formal context. For Y ⊆ P and 0.5 < α ≤ 1, then
(1) apr̲_α(Yᶜ) = {y | c(Y | ∩MD(y)) ≤ 1 − α, y ∈ Z, Z ⊆ P, B, C ⊆ Q, (Z, (B, C)) ∈ M(OEL)},
(2) apr̄_α(Yᶜ) = {y | c(Y | ∩MD(y)) < α, y ∈ Z, Z ⊆ P, B, C ⊆ Q, (Z, (B, C)) ∈ M(OEL)},
where Yᶜ denotes the complement of Y.
Proof. 
For y ∈ P, it holds that
c(Yᶜ | ∩MD(y)) = 1 − c(Y | ∩MD(y)).
(1)
If c(Yᶜ | ∩MD(y)) ≥ α, then 1 − c(Y | ∩MD(y)) ≥ α, which is equivalent to c(Y | ∩MD(y)) ≤ 1 − α. Hence, (1) can be proven.
(2)
Similarly, if c(Yᶜ | ∩MD(y)) > 1 − α, then 1 − c(Y | ∩MD(y)) > 1 − α, which is equivalent to c(Y | ∩MD(y)) < α. Thus, (2) can be proven.    □
According to Definition 8, the establishment of the concept model introduced in this paper depends on the parameter α . As a special case, we will examine the unique characteristics of the proposed model when α = 1 .
Theorem 4. 
Let OEL(P, Q, R) be an OE concept lattice. For a subset of objects Y ⊆ P, if α = 1, then
(1)
apr̲₁(Y) = {y | ∩MD(y) ⊆ Y, y ∈ Z, Z ⊆ P, B, C ⊆ Q, (Z, (B, C)) ∈ M(OEL)},
(2)
apr̄₁(Y) = {y | Y ∩ (∩MD(y)) ≠ ∅, y ∈ Z, Z ⊆ P, B, C ⊆ Q, (Z, (B, C)) ∈ M(OEL)}.
Proof. 
(1) For α = 1, since c(Y | ∩MD(y)) ≥ 1 and c(Y | ∩MD(y)) ∈ [0, 1], we get c(Y | ∩MD(y)) = 1, i.e., Y ∩ (∩MD(y)) = ∩MD(y); thus, ∩MD(y) ⊆ Y. By Definition 8, then
apr̲₁(Y) = {y | c(Y | ∩MD(y)) ≥ 1, y ∈ Z, Z ⊆ P, B, C ⊆ Q, (Z, (B, C)) ∈ M(OEL)} = {y | ∩MD(y) ⊆ Y, y ∈ Z, Z ⊆ P, B, C ⊆ Q, (Z, (B, C)) ∈ M(OEL)}.
(2) Similarly, for α = 1 the condition c(Y | ∩MD(y)) > 0 is equivalent to Y ∩ (∩MD(y)) ≠ ∅, so apr̄₁(Y) = {y | Y ∩ (∩MD(y)) ≠ ∅, y ∈ Z, Z ⊆ P, B, C ⊆ Q, (Z, (B, C)) ∈ M(OEL)} can be proven.    □
Assume that 0.5 < α ≤ 1; then, apr̲₁(Y) ⊆ apr̲_α(Y) and apr̄_α(Y) ⊆ apr̄₁(Y) hold, which is consistent with Theorem 2.
Theorem 5. 
Let OEL(K) be an OE concept lattice. For Y ⊆ P, then apr̲₁(Y) ⊆ Y.
Proof. 
Let y_j ∈ apr̲₁(Y) and E_j = {y_j}, j = 1, 2, …, |P|. By Theorem 4, then
apr̲₁(Y) = {y | ∩MD(y) ⊆ Y, y ∈ Z, Z ⊆ P, B, C ⊆ Q, (Z, (B, C)) ∈ M(OEL)}.
Since y_j belongs to every extension in MD(y_j), it follows that E_j = {y_j} ⊆ ∩MD(y_j) ⊆ Y. By Proposition 1(3) and Proposition 1(4), E_j ⊆ Y also yields E_j◁▷ ⊆ Y◁▷, and by Proposition 1(1), E_j ⊆ E_j◁▷. Taking the union over all such y_j gives apr̲₁(Y) = ⋃_j E_j ⊆ Y. Therefore, apr̲₁(Y) ⊆ Y.    □
Theorem 6. 
Let OEL(P, Q, R) be an OE concept lattice. For Y ⊆ P, then Y ⊆ apr̄₁(Y).
Proof. 
Let y_j ∈ Y and E_j = {y_j}, j = 1, 2, …, |P|. By Theorem 4, we obtain
apr̄₁(Y) = {y | Y ∩ (∩MD(y)) ≠ ∅, y ∈ Z, Z ⊆ P, B, C ⊆ Q, (Z, (B, C)) ∈ M(OEL)}.
Since y_j ∈ Y and y_j ∈ ∩MD(y_j), we have Y ∩ (∩MD(y_j)) ⊇ E_j = {y_j} ≠ ∅, so y_j ∈ apr̄₁(Y). Therefore, Y ⊆ apr̄₁(Y).    □

4. Approximating Undefinable Sets in a Three-Way Concept

In this section, we present an optimization algorithm (three-way approximation optimization algorithm, TAO) to approximate undefinable sets.

4.1. Proposed Algorithm

The purpose of the TAO algorithm is to identify the lower and upper approximations for an undefinable set of objects. Initially, a maximal description is established on the basis of meet-irreducible elements to create its minimal neighborhood set. Afterwards, the covering-based rough set is employed to derive the lower and upper approximations.
The TAO algorithm is made up of three parts: Algorithm 1, which is tasked with finding the minimal neighborhood of the maximal description; Algorithm 2, which computes the lower approximation; and Algorithm 3, which calculates the upper approximation.
Reducing redundant information is a key objective of the TAO algorithm. To gauge the effectiveness of the TAO algorithm in handling redundant information, we employ the redundancy ratio and reduction ratio for quantitative measurement.
Definition 9 
(redundancy ratio).
R_lu = λR_lower + (1 − λ)R_upper,
where
R_upper = |apr̄_α(Y) − Y| / |P|, R_lower = |apr̲_α(Y) − Y| / |P|, and λ ∈ [0, 1].
R_upper, the upper approximation redundancy ratio, measures the difference between the upper approximation and the undefinable set Y. R_lower, the lower approximation redundancy ratio, measures the difference between the lower approximation and the undefinable set Y. Moreover, a higher R_lu indicates stronger data redundancy, which may degrade the algorithm's performance. Additionally, when λ = 1, R_lu depends entirely on R_lower, and when λ = 0, R_lu depends entirely on R_upper.
Definition 10 
(reduction ratio, RR).
RR = 1 − C₁/C₂.
The reduction ratio (RR) serves to evaluate the comparative efficacy of concept reduction between two algorithms under comparison. Here, C₁ denotes the count of concepts generated by the first algorithm and C₂ represents the count of concepts generated by the second. Mathematically, the value of RR lies within the interval [0, 1). A higher reduction ratio implies that the first algorithm generates fewer concepts than the second. Specifically, when C₁ = C₂, the first algorithm exhibits no reduction effect compared to the second, resulting in RR = 0. Conversely, as RR approaches 1, the first algorithm achieves a substantial reduction in the number of concepts, highlighting its superior concept reduction capability.
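Definitions 9 and 10 translate directly into code. In the sketch below the function and argument names are our own, and the difference between an approximation and Y is read as the set difference:

```python
def redundancy_ratio(low, up, Y, P, lam=0.5):
    """R_lu = lam * R_lower + (1 - lam) * R_upper (Definition 9)."""
    r_lower = len(low - Y) / len(P)   # lower-approximation redundancy
    r_upper = len(up - Y) / len(P)    # upper-approximation redundancy
    return lam * r_lower + (1 - lam) * r_upper

def reduction_ratio(c1, c2):
    """RR = 1 - C1/C2 (Definition 10); c2 is the reference concept count."""
    return 1 - c1 / c2

# Using the approximations of Example 2: lower = {y4, y5}, upper = P.
P = {1, 2, 3, 4, 5, 6}
Y = {2, 4, 5}
print(redundancy_ratio({4, 5}, P, Y, P))  # 0.25
print(reduction_ratio(10, 40))            # 0.75
```

With balanced weighting (λ = 0.5), all of the redundancy in this example comes from the upper approximation, which contains three objects outside Y.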
Algorithm 1 Determining the minimal neighborhood of the maximal description
Input: the formal context K = (P, Q, R)
Output: ∩MD(y_j)
1: while y_j ∈ P (j = 1, 2, …, |P|) do
2:    according to Definition 5, find the set of extensions of the meet-irreducible elements M(OEL), denoted as M_P(OEL)
3: end while
4: while y_j ∈ P (j = 1, 2, …, |P|) do
5:    if y_j ∈ Z_s and y_j ∈ Z_t, Z_s, Z_t ⊆ P then
6:       for Z_s ∈ M_P(OEL) (s = 1, 2, …, m) do
7:          for Z_t ∈ M_P(OEL) (t = 1, 2, …, m) do
8:             compute the maximal description MD(y_j) by Equation (1)
9:          end for
10:       end for
11:    end if
12: end while
13: generate the minimal neighborhood of the maximal description ∩MD(y_j)
14: return ∩MD(y_j)
Algorithm 2 Computing the lower approximation
Input: ∩MD(y_j), Y = {y₁, …, y_n}, n ≤ |P|, E_j = {y_j}, threshold α
Output: apr̲_α(Y) = E
1: j = 1, E = ∅
2: while j < |P| + 1 do
3:    r_j = c(Y | ∩MD(y_j))
4:    if r_j ≥ α then
5:       E = E ∪ E_j, j = j + 1
6:    else
7:       E = E, j = j + 1
8:    end if
9: end while
10: apr̲_α(Y) = E
11: return apr̲_α(Y) = E
Algorithm 3 Calculating the upper approximation
Input: ∩MD(y_j), Y = {y₁, …, y_n}, n ≤ |P|, E_j = {y_j}, threshold α
Output: apr̄_α(Y) = E
1: j = 1, E = ∅
2: while j < |P| + 1 do
3:    r_j = c(Y | ∩MD(y_j))
4:    if r_j > 1 − α then
5:       E = E ∪ E_j, j = j + 1
6:    else
7:       E = E, j = j + 1
8:    end if
9: end while
10: apr̄_α(Y) = E
11: return apr̄_α(Y) = E
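Once the ratios r_j = c(Y | ∩MD(y_j)) are available, Algorithms 2 and 3 reduce to a single threshold scan. A compact Python rendering follows (a sketch only: the ratios are supplied as a dict rather than recomputed, and the final derivation step that turns the result into an OE concept is omitted):

```python
def tao_lower(ratios, alpha):
    """Algorithm 2: keep every object whose ratio meets the threshold."""
    return {y for y, r in ratios.items() if r >= alpha}

def tao_upper(ratios, alpha):
    """Algorithm 3: keep every object whose ratio exceeds 1 - alpha."""
    return {y for y, r in ratios.items() if r > 1 - alpha}

# Ratios from Example 2 (Y = {y2, y4, y5}).
ratios = {1: 1/2, 2: 2/3, 3: 1/2, 4: 1.0, 5: 1.0, 6: 1/3}
print(tao_lower(ratios, 0.8))  # {4, 5}
print(tao_upper(ratios, 0.8))  # {1, 2, 3, 4, 5, 6}
```

Each scan touches every object exactly once, which is the source of the O(|P|) bounds derived in the complexity analysis below.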

4.2. Algorithm Complexity Analysis

Let K = (P, Q, R) be a formal context. While meet-irreducible elements are classified as attribute concepts, not all attribute concepts are necessarily meet-irreducible elements [40]; therefore, |M_P(OEL)| ≤ 2|Q|. Since object OE concepts are indeed OE concepts, |P| ≤ |OEL(K)| ≤ 2^|P|.
The primary operations of Algorithm 1 are divided into two components: finding M_P(OEL) and generating ∩MD(y_j). The time complexity of the first component is O(2|Q|) and the time complexity of the second component is O(|P||M_P(OEL)|² + |P||M_P(OEL)|). Therefore, the time complexity of the whole of Algorithm 1 is O(2|Q| + |P||M_P(OEL)|² + |P||M_P(OEL)|) = O(|P||M_P(OEL)|²) = O(4|P||Q|²). The primary operations of Algorithm 2 are dedicated to describing the lower approximation, with the overall time complexity of Algorithm 2 being O(|P|). Likewise, the complexity analysis of Algorithm 3 parallels that of Algorithm 2 and is not detailed here; thus, the time complexity of Algorithm 3 also remains O(|P|). As a result, the overall time complexity of the entire algorithm is O(4|P||Q|² + 2|P|).
The method in [39] demonstrates effective results in computing lower and upper approximations in classical concept lattices and can also be extended to the three-way concept. The time complexity of the method in [39] is composed of two parts: constructing all OE concepts and computing the lower and upper approximations. This method involves creating OEL(K) and subsequently performing a one-by-one comparison of all OE concepts. The set of all OE concepts, denoted as OEL(K), is calculated by the method of reference [22], with the time complexity of constructing OEL(K) represented as O(|OEL(K)||P||Q|²). The second part could require, at most, O(2|OEL(K)|) time. Consequently, the overall time complexity of the method in [39] is O(|OEL(K)||P||Q|² + 2|OEL(K)|).
The TAO algorithm offers the advantage of efficiently calculating the lower and upper approximations for large-scale formal contexts, in which it is usually observed that |OEL(K)| ≫ 4. As a result, the TAO algorithm exhibits lower complexity compared to the method in [39].

5. Experimental Results and Analysis

In order to verify the validity of the TAO algorithm, we compared it with the functionally similar method described in reference [39].

5.1. Datasets and Experimental Environment

We utilized five datasets (Surveillance, Wpbc, Wdbc, Sonar, CKD) from the UCI Machine Learning Repository (https://archive.ics.uci.edu/, accessed on 20 February 2025). Due to constraints regarding running time and computer memory, we selected only partial data from the original datasets as formal contexts for the experiments. To ensure the scientific validity and reproducibility of the algorithm, we employed random sampling of the datasets, with the specific details presented in Table 2. For the Surveillance, Wpbc, Wdbc, Sonar, and CKD datasets, each attribute value was first normalized to the interval [0, 1] using the Min–Max normalization method and then binarized: it was set to 0 if it was less than the object's average across all attributes and to 1 otherwise. The experimental environment consisted of a 64-bit Windows 11 operating system, an Intel Core i7-12700 2.10 GHz CPU, 16 GB of RAM, and Python 3.13.
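The preprocessing step described above can be sketched as follows. This is our own reading of the procedure (per-column Min–Max normalization followed by per-object thresholding at the object's mean), and the function name is invented:

```python
def binarize(rows):
    """Min-max normalize each attribute column to [0, 1], then set a cell
    to 1 iff its normalized value is at least the object's mean over all
    attributes (0 otherwise), yielding a binary formal context."""
    cols = list(zip(*rows))
    lo = [min(col) for col in cols]
    hi = [max(col) for col in cols]
    norm = [
        [(v - l) / (h - l) if h > l else 0.0
         for v, l, h in zip(row, lo, hi)]
        for row in rows
    ]
    return [
        [1 if v >= sum(row) / len(row) else 0 for v in row]
        for row in norm
    ]

print(binarize([[0.0, 10.0], [10.0, 0.0], [5.0, 5.0]]))
# [[0, 1], [1, 0], [1, 1]]
```

A constant column (max equal to min) is mapped to 0.0 before thresholding, a convention we chose to avoid division by zero.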

5.2. Experimental Evaluation and Analysis

This subsection presents and analyzes the experimental results. To ensure the accuracy of the outcomes, each experiment was repeated 10 times, and the average values were recorded. The experiments measured the number of elements in the lower approximation and upper approximation and the memory usage for both the TAO algorithm and the method proposed in [39] across various thresholds, as detailed in Table 3, where L 1 and U 1 denote the lower and upper approximations derived from TAO, while L 2 and U 2 correspond to those of the method in [39]. Table 3 reveals that the memory usage of TAO is consistently much lower than that of the method in [39].
We applied the datasets to compare TAO with the method in reference [39]. The performance of these algorithms was evaluated by the redundancy ratio (with λ = 0.5 for balanced weighting), running time, and memory usage. Figure 2 presents a comparative analysis of the redundancy ratios between TAO and the method proposed in [39] across five datasets: Surveillance, Wpbc, Wdbc, Sonar, and CKD. Evidently, TAO exhibited a consistently lower redundancy ratio than the comparative method, underscoring its enhanced efficacy in reducing redundancy. Notably, for the Wpbc dataset, while the redundancy ratios coincided at α = 0.6 and α = 0.7 for both methods, TAO maintained superiority under other threshold values. Collectively, these results demonstrate that TAO outperforms the method in [39] across diverse threshold settings. Figure 3 illustrates the running times of TAO and the method in [39] across the same datasets. The outcomes reveal that TAO achieves shorter execution durations on all evaluated datasets, confirming its superior computational efficiency.
TAO demonstrated significant advantages over the method in [39] in terms of redundancy ratio (Figure 2), running time (Figure 3), and memory usage (Table 3). Consequently, it is evident that TAO’s performance is superior.
The selected concept counts are presented in Table 4, where C1 represents TAO's counts and C2 denotes those of the method in [39]. As shown, TAO not only yielded fewer selected concepts but also exhibited higher certainty than the approach in [39], together with a lower redundancy ratio, shorter running time, and lower memory usage. Thus, the concepts selected by TAO offer better representation and effectiveness.
Overall, the experimental results demonstrate that TAO significantly outperforms the method in [39] in terms of redundancy ratio, running time, and memory usage. Unlike the approach in [39], TAO's computing process selects fewer concepts, and this reduction directly improves computational efficiency and decreases memory usage.

6. Conclusions

In this study, we introduced novel lower and upper approximations based on meet-irreducible elements to accurately describe undefinable sets, and we presented the TAO algorithm. Compared with the method in reference [39], which requires the construction of OE-concept lattices, TAO reduces the computational overhead involved in approximating undefinable sets. The experimental results show that the TAO algorithm outperforms the existing method in terms of redundancy ratio, running time, and memory usage.
In our future research, we will focus on the following two directions: (1) The work presented in this paper is grounded in OE-concepts; given the duality between OE-concepts and AE-concepts, we will further explore the approximation characterization of undefinable attribute sets based on AE-concepts. (2) This study was conducted in complete formal contexts, whereas incomplete formal contexts are common in practical applications; we will therefore study the approximation of undefinable object sets in incomplete formal contexts.

Author Contributions

Conceptualization, Writing—review, R.W.; Conceptualization, Validation, and Writing—original draft and editing, M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received funding through grants from the Ningxia Scientific and Technological Leading Talent Project (2022GKLRLX08).

Data Availability Statement

The experimental datasets were obtained from the UCI Machine Learning Repository (https://archive.ics.uci.edu/, accessed on 20 February 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356.
  2. Stepaniuk, J.; Skowron, A. Three-way approximation of decision granules based on the rough set approach. Int. J. Approx. Reason. 2023, 155, 1–16.
  3. Janusz, A.; Ślęzak, D. Rough set methods for attribute clustering and selection. Appl. Artif. Intell. 2014, 28, 220–242.
  4. Siminski, R.; Wakulicz-Deja, A. Rough sets inspired extension of forward inference algorithm. In Proceedings of the 24th International Workshop on Concurrency, Specification and Programming (CS&P 2015), Rzeszow, Poland, 28–30 September 2015; Volume 2, pp. 161–172.
  5. Shao, M.W.; Wu, Z.W.; Wang, C.Z. Knowledge reduction methods of covering approximate spaces based on concept lattice. Knowl.-Based Syst. 2020, 191, 105269.
  6. Zakowski, W. Approximations in the space (U, π). Demonstr. Math. 1983, 16, 761–770.
  7. Wang, J.Q.; Zhang, X.H. Matrix approaches for some issues about minimal and maximal descriptions in covering-based rough sets. Int. J. Approx. Reason. 2019, 104, 126–143.
  8. Zhu, W. Relationship between generalized rough sets based on binary relation and covering. Inf. Sci. 2009, 179, 210–225.
  9. Zhang, Y.L.; Luo, M.K. Relationships between covering-based rough sets and relation-based rough sets. Inf. Sci. 2013, 225, 55–71.
  10. Yao, Y.Y.; Yao, B.X. Covering based rough set approximations. Inf. Sci. 2012, 200, 91–107.
  11. Janicki, R.; Lenarčič, A. Optimal approximations with rough sets. Lect. Notes Comput. Sci. 2013, 8171, 87–98.
  12. Wille, R. Restructuring lattice theory: An approach based on hierarchies of concepts. Ordered Sets 1982, 83, 445–470.
  13. Alwersh, M.; Kovács, L. K-Means extensions for clustering categorical data on concept lattice. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 492–507.
  14. Ibrahim, M.H.; Missaoui, R. Mining actionable concepts in concept lattice using interestingness propagation. J. Comput. Sci. 2024, 75, 102196.
  15. Shao, M.W.; Hu, Z.Y.; Wu, W.Z.; Liu, H. Graph neural networks induced by concept lattices for classification. Int. J. Approx. Reason. 2023, 154, 262–276.
  16. Nikolay, B.; Mustafa, M.; Nurakunov, A. On Concept Lattices for Numberings. Tsinghua Sci. Technol. 2024, 29, 1642–1650.
  17. Pang, J.Z.; Zhang, B.; Chen, M.H. A novel L-fuzzy concept learning via two-way concept-cognitive learning and residuated implication. Int. J. Fuzzy Syst. 2024, 26, 2783–2804.
  18. Qi, J.J.; Wei, L.; Yao, Y.Y. Three-way formal concept analysis. RSKT 2014, 8818, 732–741.
  19. Zhang, Z.; Jin, Y.H.; Zhao, X.C.; Ba, Y.J. A fast algorithm for three-way object-oriented concept acquisition. Appl. Sci. 2025, 15, 6486.
  20. Xie, J.P.; Yang, J.; Li, J.H.; He, M.W.; Song, H.X. Three-way concept lattice construction and association rule acquisition. Inf. Sci. 2025, 701, 121867.
  21. Hao, F.; Yang, Y.X.; Min, G.Y.; Loia, V. Incremental construction of three-way concept lattice for knowledge discovery in social networks. Inf. Sci. 2021, 578, 257–280.
  22. Hu, Q.; Zhang, J.; Mi, J.S.; Yuan, Z.; Li, M.Z. TIEOD: Three-way concept-based information entropy for outlier detection. Appl. Soft Comput. 2025, 170, 112642.
  23. Zhao, L.; Lu, L.X.; Yao, W. Generalized three-way formal concept lattices. Soft Comput. 2023, 27, 11219–11226.
  24. Long, B.H.; Deng, T.Q.; Yao, Y.Y.; Xu, W.H. Three-way concept lattice from adjunctive positive and negative concepts. Int. J. Approx. Reason. 2024, 174, 109272.
  25. Zhang, J.; Hu, Q.; Mi, J.S.; Fu, C. Hesitant fuzzy three-way concept lattice and its attribute reduction. Appl. Intell. 2024, 54, 2445–2457.
  26. Liu, M.; Zhu, P. Fuzzy object-induced network three-way concept lattice and its attribute reduction. Int. J. Approx. Reason. 2024, 173, 109251.
  27. Zhang, C.L.; Li, J.J.; Lin, Y.D. Matrix-based reduction approach for one-sided fuzzy three-way concept lattices. J. Intell. Fuzzy Syst. 2021, 40, 11393–11410.
  28. Hu, Z.Y.; Shao, M.W.; Mi, J.S.; Wu, W.Z. Mining positive and negative rules via one-sided fuzzy three-way concept lattices. Fuzzy Sets Syst. 2024, 479, 108842.
  29. Wei, L.; Liu, L.; Qi, J.J.; Qian, T. Rules acquisition of formal decision contexts based on three-way concept lattices. Inf. Sci. 2020, 516, 529–544.
  30. Zhao, J.; Wan, R.X.; Miao, D.Q.; Zhang, B.Y. Rule acquisition of three-way semi-concept lattices in formal decision context. CAAI Trans. Intell. Technol. 2023, 9, 333–347.
  31. Ren, R.S.; Wei, L.; Qi, J.J.; Wei, X.S. Three-way conceptual knowledge updating in incomplete contexts. Int. J. Approx. Reason. 2024, 175, 109299.
  32. Guo, D.D.; Xu, W.H.; Ding, W.P.; Yao, Y.Y.; Wang, X.Z.; Pedrycz, W.; Qian, Y.H. Concept-cognitive learning survey: Mining and fusing knowledge from data. Inf. Fusion 2024, 109, 102426.
  33. Wang, Z.; Qi, J.J.; Shi, C.J.; Ren, R.S.; Wei, L. Multiview granular data analytics based on three-way concept analysis. Appl. Intell. 2023, 53, 14645–14667.
  34. Tong, S.R.; Sun, B.Z.; Zhang, L.; Chu, X.L. An approach of multi-criteria group decision making with incomplete information based on formal concept analysis and rough set. Expert Syst. Appl. 2024, 248, 123364.
  35. Zhang, R.L.; Xiong, S.W.; Chen, Z. Construction method of concept lattice based on improved variable precision rough set. Neurocomputing 2016, 188, 326–338.
  36. Wang, G.; Mao, H.; Liu, C.; Zhang, Z.M.; Yang, L.Z. Rough set approximations based on a matroidal structure over three sets. Appl. Intell. 2023, 53, 13082–13109.
  37. Yao, Y.Y. Concept lattices in rough set theory. In Proceedings of the IEEE Annual Meeting of the Fuzzy Information Processing Society (NAFIPS '04), Banff, AB, Canada, 27–30 June 2004.
  38. Mohanty, D. Rough set on concept lattice. Comput. Eng. Intell. Syst. 2012, 3, 46–54.
  39. Mao, H.; Kang, R. Variable precision rough set approximations in concept lattice. Prog. Res. Math. 2015, 2, 47–56.
  40. Davey, B.A.; Priestley, H.A. Introduction to Lattices and Order, 2nd ed.; Cambridge University Press: Cambridge, UK, 2002.
  41. Ganter, B. Formal Concept Analysis: Mathematical Foundations; Springer: New York, NY, USA, 1999.
Figure 1. The OEL(K) in Example 1.
Figure 2. Comparison of redundancy ratio with different thresholds. Source: Mao and Kang (2015) [39].
Figure 3. Comparison of running times with different thresholds for α . Source: Mao and Kang (2015) [39].
Table 1. The formal context ( P , Q , R ) .
        a   b   c   d   e
y1
y2
y3
y4
y5
y6
Table 2. Description of the datasets.
    Datasets       Objects   Attributes
1   Surveillance    14         7
2   Wpbc            99        10
3   Wdbc           149        15
4   Sonar           79        15
5   CKD            232        24
Table 3. Performance comparison with different thresholds for α .
Datasets       α     TAO                          The Method in [39]
                     L1    U1    Memory (MB)      L2     U2    Memory (MB)
Surveillance   0.6   3     10    0.3015           14     14    10349.0580
               0.7   3     14    0.3015           14     14    10349.0604
               0.8   3     14    0.3016           14     14    10349.0586
               0.9   3     14    0.3016           14     14    10349.0587
               1.0   3     14    0.3015           14     14    10349.0576
Wpbc           0.6   99    99    0.6436           99     99    11999.6610
               0.7   98    99    0.6437           99     99    11999.6803
               0.8   15    99    0.6442           99     99    11999.6889
               0.9   0     99    0.6436           99     99    11999.6736
               1.0   0     99    0.6437           99     99    11999.7292
Wdbc           0.6   72    149   0.8624           149    149   11807.6815
               0.7   29    149   0.8643           149    149   11808.1439
               0.8   28    149   0.8659           149    149   11808.1454
               0.9   7     149   0.8733           147    149   11808.1494
               1.0   7     149   0.8743           147    149   11808.1455
Sonar          0.6   13    19    0.4446           49     79    11850.6182
               0.7   13    47    0.4445           42     79    11850.8190
               0.8   13    51    0.4445           42     79    11850.9866
               0.9   13    51    0.4446           42     79    11851.4954
               1.0   13    51    0.4446           42     79    11851.4988
CKD            0.6   196   232   0.7439           232    232   12903.0880
               0.7   196   232   0.7439           232    232   12906.9141
               0.8   196   232   0.7443           232    232   12921.5596
               0.9   196   232   0.7444           232    232   12940.7803
               1.0   196   232   0.7444           232    232   12943.3284
Table 4. Comparison of selected concept counts.
Dataset        C1     C2        Reduction Ratio
Surveillance   14     103       86.4078%
Wpbc           99     980       89.8979%
Wdbc           149    5392      97.2366%
Sonar          79     8179      99.0341%
CKD            232    601,999   99.9615%
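The Reduction Ratio column in Table 4 is consistent with computing (C2 − C1)/C2 from the concept counts. The following sketch reproduces the reported percentages to within rounding; the formula is inferred from the table values rather than stated explicitly in this section.

```python
# Reduction ratio from the selected concept counts in Table 4,
# assuming reduction = (C2 - C1) / C2, i.e. the fraction of the
# comparison method's concepts that TAO avoids selecting.
counts = {
    "Surveillance": (14, 103),
    "Wpbc": (99, 980),
    "Wdbc": (149, 5392),
    "Sonar": (79, 8179),
    "CKD": (232, 601_999),
}

for name, (c1, c2) in counts.items():
    ratio = (c2 - c1) / c2 * 100
    print(f"{name}: {ratio:.4f}%")
```

For example, Surveillance gives (103 − 14)/103 × 100 ≈ 86.4078%, matching the table; the remaining rows agree in the same way.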
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Li, M.; Wan, R. Three-Way Approximations with Covering-Based Rough Set. Axioms 2025, 14, 721. https://doi.org/10.3390/axioms14100721
