Article

A Hypergraph-Based Approach to Attribute Reduction in an Incomplete Decision System

School of Computer Science and Mathematics, Fujian University of Technology, Fuzhou 350118, China
*
Author to whom correspondence should be addressed.
Symmetry 2025, 17(6), 911; https://doi.org/10.3390/sym17060911
Submission received: 13 May 2025 / Revised: 1 June 2025 / Accepted: 6 June 2025 / Published: 9 June 2025
(This article belongs to the Section Mathematics)

Abstract

Attribute reduction has been demonstrated to be an effective approach for addressing fuzziness and uncertainty in data analysis, especially for data dimension reduction. As an extension of graphs, hypergraphs have been established by prior research as a potent mathematical framework for attribute reduction in decision systems. However, current studies rarely explore the integration of hypergraph and rough set theories for attribute reduction in incomplete decision systems. To bridge this theoretical gap, this paper proposes a novel hypergraph-based attribute reduction method for incomplete decision systems through matrices. Firstly, we introduce two construction methods for the characteristic matrices of a hypergraph and examine the matrix decomposition relationship between them. Moreover, some features of hypergraphs, including transversals, are systematically investigated via these characteristic matrices. Secondly, using the characteristic matrices of the hypergraphs derived from an incomplete information system, a hypergraph-based method is developed for attribute reduction in incomplete information systems via a discernibility matrix. Finally, we discuss attribute reduction for incomplete decision systems and establish a new judgment method for attribute reduction in incomplete decision systems through the constructed characteristic matrices of hypergraphs.

1. Introduction

In many data mining processes, high data dimensionality is a frequent challenge: thousands of features significantly increase the complexity of data mining [1]. Therefore, dimensionality reduction methods are often used as a data preprocessing step to improve recognition accuracy. This process is known as feature selection. Feature selection based on rough set theory [2], also called attribute reduction, aims to remove redundant features and find a minimal feature subset that provides the same classification information as the original feature set; it has been successfully applied in machine learning and pattern recognition. The upper and lower approximation sets, which are the core concepts of classical rough sets, are used to describe positive regions. Therefore, how to reform the method of calculating approximation sets so as to improve the efficiency of attribute reduction has become an important research topic [3]. In fact, almost all calculations in rough set models, such as feature selection and rule extraction, are related to the computation of the positive region. Hence, the method for calculating approximation sets is a very important research topic in rough set theory.
The classical rough set theory has proved effective in discovering knowledge rules from complete decision systems (CDSs). Since classical rough sets are based on equivalence relations, their applicability is limited. To strengthen the performance of classical rough set algorithms, many scholars have combined rough sets with other theories and obtained fruitful results [4,5]. Unfortunately, the classical rough set theory is not suitable for incomplete decision systems (IDSs). However, in real life, there exist incomplete information systems (IISs) with missing values [6,7]. This urges scholars to explore new attribute reduction methods that meet practical needs [8,9].
Rough-set-based feature selection has been widely discussed in recent years [10,11]. For example, using similarity relations, Kryszkiewicz [12] presented a method for eliminating knowledge or attributes from an IDS and proposed a new method for calculating all optimal certain rules from IDSs. Following this work, the notion of a maximal consistent block was proposed to deal with the problem of knowledge discovery in IDSs [13]. Furthermore, a rough-set-based method for knowledge acquisition in IDSs was obtained in [14]. Based on the maximal consistent block, Qian et al. [15] defined an extension of combination entropy for measuring uncertainty in incomplete information systems. Building on Kryszkiewicz's work, Wu et al. studied attribute reduction in IDSs from the viewpoint of the Dempster–Shafer theory of evidence and presented the relationships among classical reducts, belief reducts and plausibility reducts [16]. Yang et al. proposed a neighborhood-system-based method for incomplete information systems [17]. Integrating attribute measures based on the indiscernibility and discernibility relations, Shu et al. [18] developed two efficient algorithms for computing attribute reducts of an IIS.
In recent years, research on feature selection based on matrix and graph methods has attracted much attention [19]. Wang et al. [20] studied the relationship between matroids and rough sets through graphs and matrices, and gave a graph representation of the lower and upper approximations. In [21,22], Chen et al. studied attribute reduction in classical rough sets through the vertex covers of graphs and discussed graph-based attribute reduction methods; moreover, based on a hypergraph model, they studied attribute reduction for covering decision systems. Tan and Li studied reduction in covering decision information systems by matrix-based methods, and the notions of minimal and maximal descriptions in covering decision information systems were proposed in [23]. Based on covering rough sets, Yang et al. [24] proposed a heuristic description-vector algorithm for feature selection in covering decision systems. Mao et al. studied the attribute reduction problem in a formal context by applying diagrams of hypergraphs, thus realizing the visualization of attribute reduction [25]. In addition, matrices have been used in the theoretical foundations and application frameworks of rough sets, for example characteristic matrices [26,27] and discernibility matrices [28]. It can be seen that graph and matrix techniques are widely used and attract more and more attention [29,30]. Within such a framework, it is very interesting and useful to study feature selection for rough sets through graph and matrix methods. However, there are few studies that address attribute reduction in IDSs based on matrices and graphs. This prompted us to explore the properties of IDSs from the perspective of matrices and graphs and to propose a novel matrix- and graph-based method for the attribute reduction of IDSs.
In this paper, we construct matrix representations of hypergraphs and apply them to the attribute reduction of IDSs. The main contributions of this study are summarized as follows: (1) Two types of characteristic matrices of a hypergraph are introduced, and the relationship between them is proven: one characteristic matrix is the ⨀-product of the other with its transpose. Through matrix operations, key features of hypergraphs are systematically characterized. (2) A hypergraph-based method is established for attribute reduction in an IIS via the discernibility matrix derived from the characteristic matrices of hypergraphs. (3) A novel judgment method is given for attribute reduction in both consistent and inconsistent IDSs using the characteristic matrices of hypergraphs.
This paper is organized as follows. Section 2 reviews some basic concepts related to incomplete information systems and hypergraphs. In Section 3, the characteristic matrices of hypergraphs are constructed and some features of hypergraphs are given by matrix representations. In Section 4, the characteristic matrices of hypergraphs are applied to attribute reduction in incomplete decision systems, and a new judgment method for attribute reduction in IISs based on the constructed characteristic matrices is proposed. Section 5 concludes this paper.

2. Preliminaries

In this section, we briefly review several basic concepts, such as incomplete information systems, hypergraphs and so on.

2.1. Incomplete Information Systems

Firstly, we will introduce the concepts of information system and incomplete information system.
Definition 1
([2]). An information system (IS) is defined as a 4-tuple $S=(U,AT,V,f)$, where U is a nonempty finite set of objects, i.e., the universe of discourse, and AT is a nonempty finite set of attributes; $V=\bigcup_{a\in AT}V_a$, where $V_a$ is the domain of the attribute a; and $f:U\times AT\to V$ is a function such that $f(x,a)\in V_a$ for any $a\in AT$, $x\in U$.
For any subset of attributes $A\subseteq AT$, A induces an indiscernibility relation $IND(A)=\{(x,y)\in U\times U\mid \forall a\in A,\ f(x,a)=f(y,a)\}$. Obviously, $IND(A)$ is an equivalence relation on U, which forms a partition denoted as $U/IND(A)=\{[x]_A\mid x\in U\}$, where $[x]_A=\{y\in U\mid (x,y)\in IND(A)\}$.
In practical applications, it is common to encounter datasets that contain missing information, which hinders the analysis of the relationships among objects within a comprehensive framework. To handle such situations, a null value, denoted by ∗, is usually assigned to the missing attribute values. For an information system $S=(U,AT,V,f)$, if there exist $x\in U$ and $a\in AT$ such that $f(x,a)=*$, then the information system S is called an incomplete information system (IIS).
For an IIS, the tolerance relation is introduced to handle incomplete information systems [6].
Definition 2
([6]). Suppose $S=(U,AT,V,f)$ is an IIS. For any $A\subseteq AT$, A determines a binary relation denoted by $SIM(A)$ and defined as follows: $SIM(A)=\{(x,y)\in U\times U\mid \forall a\in A,\ f(x,a)=f(y,a)\ \text{or}\ f(x,a)=*\ \text{or}\ f(y,a)=*\}$.
$SIM(A)$ is called a similarity relation on U; it is reflexive and symmetric [6], and $SIM(A)=\bigcap_{a\in A}SIM(\{a\})$. Moreover, for any $x\in U$, the similarity class of the object x is denoted by $S_A(x)=\{y\in U\mid (x,y)\in SIM(A)\}$. In particular, $S_{\{a\}}(x)$ is abbreviated as $S_a(x)$. The set of all similarity classes of objects is denoted by $U/SIM(A)=\{S_A(x)\mid x\in U\}$. It is worth noting that $U/SIM(A)$ is usually not a partition of U, but a covering of U.
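To make the tolerance relation concrete, here is a minimal Python sketch of the similarity classes $S_A(x)$ from Definition 2. The four-object table and the function names are hypothetical illustrations, not taken from this paper.

```python
MISSING = None  # plays the role of the null value *

def similar(row_x, row_y, attrs):
    """(x, y) in SIM(A): on every a in A, the values agree or one is missing."""
    return all(row_x[a] == row_y[a] or row_x[a] is MISSING or row_y[a] is MISSING
               for a in attrs)

def similarity_classes(table, attrs):
    """Return the similarity class S_A(x) of every object x as an index set."""
    return {i: {j for j in table if similar(table[i], table[j], attrs)}
            for i in table}

# A hypothetical incomplete table over two attributes a and b.
table = {1: {"a": "Low",  "b": MISSING},
         2: {"a": "Low",  "b": "High"},
         3: {"a": "High", "b": "High"},
         4: {"a": MISSING, "b": "High"}}
print(similarity_classes(table, ["a", "b"]))
# {1: {1, 2, 4}, 2: {1, 2, 4}, 3: {3, 4}, 4: {1, 2, 3, 4}}
```

Note that the classes overlap ($x_4$ belongs to all of them), so $U/SIM(A)$ is a covering rather than a partition, exactly as remarked above.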
A decision system (DS) is an IS of the form $(U,AT\cup\{d\},V,f)$, where AT is called the conditional attribute set and $d\notin AT$ is called the decision attribute. $(U,AT\cup\{d\},V,f)$ is called a complete decision information system if $f(x,a)\neq *$ for any $a\in AT\cup\{d\}$ and $x\in U$. Otherwise, $(U,AT\cup\{d\})$ is called an incomplete decision system (IDS).
Definition 3
([6,31]). Suppose $(U,AT,V,f)$ is an IIS and $A\subseteq AT$. If $SIM(AT)=SIM(A)$, i.e., $S_{AT}(x)=S_A(x)$ for any $x\in U$, then A is called a consistent set of S; if, moreover, no proper subset of A is a consistent set, then A is called a reduct of AT.
For an IDS $(U,AT\cup\{d\},V,f)$ and $A\subseteq AT$, we denote $\Upsilon_A(x)=\{f(y,d)\mid y\in S_A(x)\}$. If $|\Upsilon_{AT}(x)|=1$ for any $x\in U$, namely $S_{AT}(x)\subseteq[x]_d$ for any $x\in U$, then $(U,AT\cup\{d\})$ is called a consistent IDS. Otherwise, it is called an inconsistent IDS.
Based on similarity relations, one can induce the upper and lower approximations of an arbitrary subset X of U, defined as follows. It is worth noting that Definition 4 is actually a generalization of Pawlak's rough set model proposed in [2].
Definition 4
([6,31]). Suppose $(U,AT,V,f)$ is an IIS and $A\subseteq AT$. For any $X\subseteq U$, the lower and upper approximations of X are defined as follows:
$$\underline{A}(X)=\{x\in U\mid S_A(x)\subseteq X\},\qquad \overline{A}(X)=\{x\in U\mid S_A(x)\cap X\neq\emptyset\}.$$
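Continuing the sketch above, the approximations of Definition 4 are one set comprehension each; lower_upper is our name for this illustrative helper.

```python
def lower_upper(sim_classes, X):
    """Definition 4: S_A(x) inside X for the lower approximation,
    S_A(x) meeting X for the upper approximation."""
    lower = {i for i, S in sim_classes.items() if S <= X}
    upper = {i for i, S in sim_classes.items() if S & X}
    return lower, upper

# With the hypothetical table above, X = {1, 2} yields
# lower = set() and upper = {1, 2, 4}.
```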
Some properties of the lower and upper approximations of X are presented by the following proposition.
Proposition 1
([6,31]). Suppose $(U,AT,V,f)$ is an IIS and $A,B\subseteq AT$. For any $X,Y\subseteq U$, the following properties hold:
(1) 
$\underline{A}(X)=\overline{A}(X)=X$ for $X\in\{U,\emptyset\}$;
(2) 
$\underline{A}(X)\subseteq X\subseteq\overline{A}(X)$;
(3) 
$\underline{A}(X)=(\overline{A}(X^{C}))^{C}$, where $X^{C}=U\setminus X$;
(4) 
if $X\subseteq Y$, then $\underline{A}(X)\subseteq\underline{A}(Y)$ and $\overline{A}(X)\subseteq\overline{A}(Y)$;
(5) 
$\underline{A}(X\cap Y)=\underline{A}(X)\cap\underline{A}(Y)$ and $\overline{A}(X\cup Y)=\overline{A}(X)\cup\overline{A}(Y)$;
(6) 
if $B\subseteq A$, then $\underline{B}(X)\subseteq\underline{A}(X)$ and $\overline{A}(X)\subseteq\overline{B}(X)$.

2.2. Hypergraphs

A hypergraph is a generalization of a graph in which hyperedges may connect arbitrary sets of vertices. We now formally introduce its fundamental concepts and properties.
Definition 5
(Hypergraph [32]). Suppose $V=\{v_1,v_2,\ldots,v_n\}$ is a finite set and $\xi=\{E_1,E_2,\ldots,E_m\}$ is a family of subsets of V. $H=(V,\xi)$ is called a hypergraph on V if H satisfies:
(1) 
$E_i\neq\emptyset$ ($1\le i\le m$);
(2) 
$\bigcup_{i=1}^{m}E_i=V$.
Based on Definition 5, the elements of V and the elements of ξ are called the vertices and the hyperedges of the hypergraph H, respectively. Moreover, if $E_i\subseteq E_j$ implies $E_i=E_j$, then H is called a simple hypergraph. In a hypergraph H, the set of all hyperedges containing a vertex $v\in V$ is denoted by $D_H(v)$, and the number $|D_H(v)|$ is called the degree of the vertex v. For any hyperedge $E_i\in\xi$, the number $|E_i|$ is called the degree of the hyperedge. Two vertices of V are said to be adjacent (or neighbors) if there is a hyperedge $E\in\xi$ containing both of them. The set of all neighbors of a given vertex $v\in V$ is called the neighborhood of v, denoted by $N_H(v)$. Detailed information can be found in [32].
Example 1.
Let V = { v 1 , v 2 , v 3 , v 4 , v 5 , v 6 } and ξ = { E 1 , E 2 , E 3 , E 4 , E 5 } , where E 1 = { v 1 , v 5 } , E 2 = { v 1 , v 2 } , E 3 = { v 1 , v 2 , v 4 } , E 4 = { v 2 , v 3 , v 6 } , E 5 = { v 3 , v 4 , v 6 } . Then H = ( V , ξ ) is a hypergraph as shown in Figure 1.
In the following, we introduce two important concepts for hypergraphs, namely the transversal and the dual hypergraph. In particular, as a generalization of the notion of a cover, the transversal of a hypergraph is an important concept that is widely used in many fields, such as machine learning.
Definition 6
(Transversal of a hypergraph [32]). Suppose $H=(V,\xi)$ is a hypergraph with $\xi=\{E_1,E_2,\ldots,E_m\}$ and $T\subseteq V$. If $T\cap E_i\neq\emptyset$ for any $i\in\{1,2,\ldots,m\}$, then T is called a transversal of H. If, moreover, no proper subset $K\subsetneq T$ is a transversal, then T is called a minimal transversal of H. The set of all minimal transversals of H is denoted by $T(H)$.
Definition 7
(Dual of a hypergraph [32]). Suppose $H=(V,\xi)$ is a hypergraph with $V=\{v_1,v_2,\ldots,v_n\}$ and $\xi=\{E_1,E_2,\ldots,E_m\}$. The dual of the hypergraph H is the hypergraph $H^{*}=(V^{*},\xi^{*})$ whose vertex set is $V^{*}=\{E_1,E_2,\ldots,E_m\}$ and whose hyperedge set $\xi^{*}=\{Z_1,Z_2,\ldots,Z_n\}$ is defined by $Z_i=\{E_j\in\xi\mid v_i\in E_j\}$.
The following example illustrates the dual hypergraph of a hypergraph.
Example 2.
Continuing Example 1, the dual of $H=(V,\xi)$ is shown in Figure 2.

3. Matrix-Based Method for Hypergraphs

In this section, for a hypergraph H , two types of characteristic matrices of hypergraphs, which are denoted as OM H and SM H respectively, are constructed by the relationship between hypergraph edges and vertices. Moreover, the relationship between OM H and SM H is examined. Based on the constructed characteristic matrices of hypergraphs, some properties of hypergraphs, such as transversal, are characterized by the matrix method.
For the convenience of explanation, the matrix operations used in this article are introduced as follows.
Suppose $A=(a_{ij})_{n\times m}$ and $B=(b_{ij})_{m\times s}$ are two characteristic (0/1) matrices. Three matrix operations are used in this article:
$$A\odot B=C=(c_{ij})_{n\times s},\ \text{where}\ c_{ij}=\bigvee_{k=1}^{m}(a_{ik}\wedge b_{kj});\qquad A\oplus B=C=(c_{ij})_{n\times s},\ \text{where}\ c_{ij}=\bigwedge_{k=1}^{m}(a_{ik}\vee b_{kj}).$$
When A and B are of the same size, $A\wedge B=C=(c_{ij})_{n\times m}$, where $c_{ij}=a_{ij}\wedge b_{ij}$. We also write $\neg A$ for the entrywise complement of a characteristic matrix or vector.
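For readers who wish to experiment, the three operations admit a direct NumPy sketch on 0/1 matrices; the names odot, oplus and wedge are ours.

```python
import numpy as np

def odot(A, B):
    """Circled-dot product: c_ij = OR_k (a_ik AND b_kj)."""
    return (A[:, :, None] & B[None, :, :]).any(axis=1).astype(int)

def oplus(A, B):
    """Circled-plus product: c_ij = AND_k (a_ik OR b_kj)."""
    return (A[:, :, None] | B[None, :, :]).all(axis=1).astype(int)

def wedge(A, B):
    """Entrywise conjunction for matrices of the same size."""
    return A & B
```

Each function reduces over the shared index k of a three-dimensional broadcast, so the code stays close to the defining formulas.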
In the following, a type of characteristic vector is introduced to represent the one-to-one correspondence between a set and a vector.
Definition 8.
Suppose $U=\{x_1,x_2,\ldots,x_n\}$. For any $X\subseteq U$, define the characteristic vector $\chi_X$ as follows:
$$\chi_X=[u_1,u_2,\ldots,u_n]^{T},\quad\text{where}\ u_i=\begin{cases}1,& x_i\in X;\\ 0,& x_i\notin X.\end{cases}$$
Hypergraphs have a wide range of applications in machine learning and neural networks, and the matrix representation of hypergraphs is an important research topic. Firstly, the type-one characteristic matrix of a hypergraph is established by the relationship between edges and vertices.
Definition 9.
Let $H=(V,\xi)$ be a hypergraph with $V=\{v_1,v_2,\ldots,v_n\}$ and $\xi=\{E_1,E_2,\ldots,E_m\}$. The type-one characteristic matrix of H, denoted as $OM_{H}=(c_{ij})_{n\times m}$, is defined as follows:
$$c_{ij}=\begin{cases}1,& v_i\in E_j,\\ 0,& v_i\notin E_j.\end{cases}$$
It is easy to see that $OM_{H}=(c_{ij})_{n\times m}$ satisfies the following properties:
(1)
For any vertex $v_i\in V$ ($1\le i\le n$), the degree of the vertex $v_i$ is $\sum_{j=1}^{m}c_{ij}$, i.e., $|D_{H}(v_i)|=\sum_{j=1}^{m}c_{ij}$.
(2)
For any hyperedge $E_j\in\xi$ ($1\le j\le m$), the degree of the hyperedge $E_j$ is $\sum_{i=1}^{n}c_{ij}$, i.e., $|E_j|=\sum_{i=1}^{n}c_{ij}$. Both quantities can be read off from $OM_{H}$ mechanically, as in the sketch below.
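A small sketch (our code) on the hypergraph of Example 1:

```python
import numpy as np

edges = [{1, 5}, {1, 2}, {1, 2, 4}, {2, 3, 6}, {3, 4, 6}]  # E1..E5 of Example 1
OM = np.array([[1 if v in E else 0 for E in edges] for v in range(1, 7)])

print(OM.sum(axis=1))  # vertex degrees |D_H(v_i)|: [3 3 2 2 1 2]
print(OM.sum(axis=0))  # hyperedge degrees |E_j|:   [2 2 3 3 3]
```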
When constructing the type-one characteristic matrix of a hypergraph, changing the order of the hyperedges may change the resulting matrix. It is interesting, however, that the ⨀-product of each such matrix with its own transpose is always the same.
Proposition 2.
Suppose $H=(V,\xi)$ is a hypergraph with $V=\{v_1,v_2,\ldots,v_n\}$ and $\xi=\{E_1,E_2,\ldots,E_m\}$, and $OM_{H}^{1}$, $OM_{H}^{2}$ are two type-one characteristic matrices induced by H. Then $OM_{H}^{1}\odot(OM_{H}^{1})^{T}=OM_{H}^{2}\odot(OM_{H}^{2})^{T}$.
Proof. 
Since $OM_{H}^{2}$ can be obtained from $OM_{H}^{1}$ by a finite sequence of column exchanges, write $OM_{H}^{1}=[a_1,a_2,\ldots,a_m]$ and $OM_{H}^{2}=[a_{\sigma(1)},a_{\sigma(2)},\ldots,a_{\sigma(m)}]$, where each $a_k$ is an n-dimensional column vector ($1\le k\le m$) and σ is a permutation of $\{1,2,\ldots,m\}$. Then $OM_{H}^{1}\odot(OM_{H}^{1})^{T}=\bigvee_{k=1}^{m}\left(a_k\odot a_k^{T}\right)=\bigvee_{k=1}^{m}\left(a_{\sigma(k)}\odot a_{\sigma(k)}^{T}\right)=OM_{H}^{2}\odot(OM_{H}^{2})^{T}$.    □
The following example is presented to illustrate Proposition 2.
Example 3.
Continuing Example 1, $OM_{H}^{1}$ and $OM_{H}^{2}$ are two type-one characteristic matrices induced by H; rows are indexed by $v_1,\ldots,v_6$, and the columns are ordered $E_1,E_2,E_3,E_4,E_5$ and $E_5,E_4,E_1,E_2,E_3$, respectively:
$$OM_{H}^{1}=\begin{pmatrix}1&1&1&0&0\\0&1&1&1&0\\0&0&0&1&1\\0&0&1&0&1\\1&0&0&0&0\\0&0&0&1&1\end{pmatrix},\qquad OM_{H}^{2}=\begin{pmatrix}0&0&1&1&1\\0&1&0&1&1\\1&1&0&0&0\\1&0&0&0&1\\0&0&1&0&0\\1&1&0&0&0\end{pmatrix}.$$
It can be obtained that
$$OM_{H}^{1}\odot(OM_{H}^{1})^{T}=OM_{H}^{2}\odot(OM_{H}^{2})^{T}=\begin{pmatrix}1&1&0&1&1&0\\1&1&1&1&0&1\\0&1&1&1&0&1\\1&1&1&1&0&1\\1&0&0&0&1&0\\0&1&1&1&0&1\end{pmatrix}.$$
The type-one characteristic matrix of Definition 9 is built from the incidence between vertices and edges. In fact, another characteristic matrix of a hypergraph can be established from the relationship among its vertices.
Definition 10.
Given a hypergraph $H=(V,\xi)$ with $V=\{v_1,v_2,\ldots,v_n\}$ and $\xi=\{E_1,E_2,\ldots,E_m\}$, define the relationship matrix between vertices $SM_{H}=(c_{ij})_{n\times n}$ by $c_{ij}=1$ if there exists $E\in\xi$ such that $\{v_i,v_j\}\subseteq E$ ($1\le i,j\le n$; in particular, $c_{ii}=1$ for every $v_i$, since $\bigcup_{i=1}^{m}E_i=V$), and $c_{ij}=0$ otherwise. $SM_{H}=(c_{ij})_{n\times n}$ is called the type-two characteristic matrix induced by the hypergraph H.
It is easy to see that $SM_{H}=(c_{ij})_{n\times n}$ has the following characteristics:
(1)
$SM_{H}=(c_{ij})_{n\times n}$ is a symmetric matrix.
(2)
For any vertex $v_i\in V$ ($1\le i\le n$), the row sum $\sum_{j=1}^{n}c_{ij}=\sum_{j=1}^{n}c_{ji}$ counts $v_i$ together with its neighbors, i.e., $|N_{H}(v_i)\cup\{v_i\}|=\sum_{j=1}^{n}c_{ij}$.
Example 4.
Continuing Example 1, the type-one characteristic matrix $OM_{H}$ and the type-two characteristic matrix $SM_{H}$ induced by the hypergraph H are as follows (rows of $OM_{H}$ are indexed by $v_1,\ldots,v_6$ and columns by $E_1,\ldots,E_5$; both index sets of $SM_{H}$ are $v_1,\ldots,v_6$):
$$OM_{H}=\begin{pmatrix}1&1&1&0&0\\0&1&1&1&0\\0&0&0&1&1\\0&0&1&0&1\\1&0&0&0&0\\0&0&0&1&1\end{pmatrix},\qquad SM_{H}=\begin{pmatrix}1&1&0&1&1&0\\1&1&1&1&0&1\\0&1&1&1&0&1\\1&1&1&1&0&1\\1&0&0&0&1&0\\0&1&1&1&0&1\end{pmatrix}.$$
Based on Definitions 9 and 10, two types of matrix representations of hypergraphs have been introduced above. A natural question is: what is the relationship between them? The following theorem presents the relationship between $OM_{H}$ and $SM_{H}$ through the matrix operation ⨀.
Theorem 1.
Suppose $H=(V,\xi)$ is a hypergraph with $V=\{v_1,v_2,\ldots,v_n\}$ and $\xi=\{E_1,E_2,\ldots,E_m\}$. Then $SM_{H}=OM_{H}\odot(OM_{H})^{T}$, where $(OM_{H})^{T}$ is the transpose of $OM_{H}$.
Proof. 
Denote $SM_{H}=(a_{ij})_{n\times n}$ and $OM_{H}=(b_{ij})_{n\times m}$. Then $a_{ij}=1$ ⟺ there exists $E\in\xi$ such that $\{v_i,v_j\}\subseteq E$ ⟺ $\bigvee_{k=1}^{m}(b_{ik}\wedge b_{jk})=1$ ⟺ the $(i,j)$ entry of $OM_{H}\odot(OM_{H})^{T}$ equals 1. Thus, $SM_{H}=OM_{H}\odot(OM_{H})^{T}$.    □
According to Theorem 1, $SM_{H}$ decomposes as the ⨀-product of $OM_{H}$ with its transpose. Here is an example to illustrate this relationship.
Example 5.
Continuing Example 1, the decomposition of the type-two characteristic matrix $SM_{H}$ in terms of the type-one characteristic matrix $OM_{H}$ reads as follows:
$$OM_{H}\odot(OM_{H})^{T}=\begin{pmatrix}1&1&1&0&0\\0&1&1&1&0\\0&0&0&1&1\\0&0&1&0&1\\1&0&0&0&0\\0&0&0&1&1\end{pmatrix}\odot\begin{pmatrix}1&0&0&0&1&0\\1&1&0&0&0&0\\1&1&0&1&0&0\\0&1&1&0&0&1\\0&0&1&1&0&1\end{pmatrix}=\begin{pmatrix}1&1&0&1&1&0\\1&1&1&1&0&1\\0&1&1&1&0&1\\1&1&1&1&0&1\\1&0&0&0&1&0\\0&1&1&1&0&1\end{pmatrix}=SM_{H}.$$
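The decomposition can also be checked numerically. The following sketch (our code, restating the odot helper introduced above) reproduces $SM_{H}$ for Example 1.

```python
import numpy as np

def odot(A, B):  # circled-dot product: c_ij = OR_k (a_ik AND b_kj)
    return (A[:, :, None] & B[None, :, :]).any(axis=1).astype(int)

edges = [{1, 5}, {1, 2}, {1, 2, 4}, {2, 3, 6}, {3, 4, 6}]
OM = np.array([[1 if v in E else 0 for E in edges] for v in range(1, 7)])

SM = odot(OM, OM.T)
assert (SM == SM.T).all()   # a type-two characteristic matrix is symmetric
print(SM[0])                # the row of v1: [1 1 0 1 1 0]
```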
The following result presents the relationship between $SM_{H}$ and the characteristic vectors induced by the hyperedges of the hypergraph H.
Proposition 3.
Suppose $H=(V,\xi)$ is a hypergraph with $V=\{v_1,v_2,\ldots,v_n\}$ and $\xi=\{E_1,E_2,\ldots,E_m\}$. Then $SM_{H}=\bigvee_{k=1}^{m}\left(\chi_{E_k}\odot\chi_{E_k}^{T}\right)$, where $\bigvee$ denotes the entrywise disjunction.
Proof. 
Denote $SM_{H}=(a_{ij})_{n\times n}$. According to Definitions 9 and 10, we have the following:
(1)
For $i\neq j$: $a_{ij}=1$ ⟺ there exists $E_k$ ($1\le k\le m$) such that $\{v_i,v_j\}\subseteq E_k$ ⟺ $b_{ij}=1$ for this k, where $\chi_{E_k}\odot\chi_{E_k}^{T}=(b_{ij})_{n\times n}$.
(2)
For $i=j$: $a_{ii}=1$ ⟺ there exists $E_k$ ($1\le k\le m$) such that $v_i\in E_k$ ⟺ $b_{ii}=1$ for this k, where $\chi_{E_k}\odot\chi_{E_k}^{T}=(b_{ij})_{n\times n}$.
To sum up, $SM_{H}=\bigvee_{k=1}^{m}\left(\chi_{E_k}\odot\chi_{E_k}^{T}\right)$.    □
For a hypergraph and its dual, the corresponding type-one characteristic matrices are transposes of each other. Based on the above characteristic matrices of hypergraphs, some features of hypergraphs are described from the matrix viewpoint as follows.
Theorem 2.
Suppose $H=(V,\xi)$ is a hypergraph with $V=\{v_1,v_2,\ldots,v_n\}$ and $\xi=\{E_1,E_2,\ldots,E_m\}$, and $H^{*}=(V^{*},\xi^{*})$ is the dual of H. Then $OM_{H^{*}}=(OM_{H})^{T}$.
Proof. 
According to Definitions 7 and 9, the result is straightforward.    □
From the set-theoretic perspective, a transversal of a hypergraph is a set of vertices that intersects every edge of the hypergraph. From the matrix perspective, the characterization of transversals is more direct.
Theorem 3.
Suppose $H=(V,\xi)$ is a hypergraph with $V=\{v_1,v_2,\ldots,v_n\}$ and $\xi=\{E_1,E_2,\ldots,E_m\}$. Then $X\subseteq V$ is a transversal of H if and only if $(OM_{H})^{T}\odot\chi_X=\mathbf{1}$, where $\mathbf{1}$ is the column vector whose entries are all 1.
Proof. 
Denote $\chi_X=[u_1,u_2,\ldots,u_n]^{T}$ and $(OM_{H})^{T}=(a_{ik})_{m\times n}$. According to Definition 6, $X\subseteq V$ is a transversal of H ⟺ $X\cap E_i\neq\emptyset$ for every $i$ ($1\le i\le m$) ⟺ $\bigvee_{k=1}^{n}(a_{ik}\wedge u_k)=1$ for every $i$ ⟺ $(OM_{H})^{T}\odot\chi_X=\mathbf{1}$.    □
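The test of Theorem 3 is a one-liner on top of the same machinery; is_transversal below is our illustrative name.

```python
import numpy as np

def odot(A, B):
    return (A[:, :, None] & B[None, :, :]).any(axis=1).astype(int)

edges = [{1, 5}, {1, 2}, {1, 2, 4}, {2, 3, 6}, {3, 4, 6}]
OM = np.array([[1 if v in E else 0 for E in edges] for v in range(1, 7)])

def is_transversal(X):
    chi = np.array([[1 if v in X else 0] for v in range(1, 7)])  # column vector
    return bool(odot(OM.T, chi).all())   # all-ones iff X meets every edge

print(is_transversal({1, 3}))   # True
print(is_transversal({1, 6}))   # True
print(is_transversal({2, 5}))   # False: misses E5 = {v3, v4, v6}
```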

4. Applying the Characteristic Matrix of a Hypergraph to Attribute Reduction in IISs

In this section, the characteristic matrices of hypergraphs proposed in Section 3 are applied to the attribute reduction of IISs, and a hypergraph-based method is proposed for judging attribute reduction in an IIS through a matrix. Moreover, we establish a new discernibility matrix for calculating the reducts of an IIS based on the constructed characteristic matrices of hypergraphs. Furthermore, we investigate attribute reduction in an IDS, and give a hypergraph-based method for the attribute reduction of consistent and inconsistent decision information systems via matrices.

4.1. Hypergraph-Based Method for Attribute Reduction of IISs

In this subsection, a family of hypergraphs is induced from an IIS; in this way, the relationship between matrices and IISs is established. Moreover, we propose a hypergraph-based method for attribute reduction in an IIS through a matrix.
In an IIS $S=(U,AT)$ with $AT=\{a_1,a_2,\ldots,a_m\}$, we calculate $U/SIM(\{a_i\})=\{S_{a_i}(x)\mid x\in U\}$ for each attribute $a_i$ ($1\le i\le m$). In this way, a hypergraph, denoted $H_{a_i}$, is derived from $a_i$ as follows:
Definition 11.
Given an IIS $S=(U,AT)$ with $U=\{x_1,x_2,\ldots,x_n\}$ and $AT=\{a_1,a_2,\ldots,a_m\}$, for any $a\in AT$ denote
$$H_{a}=\begin{cases}\left(U,\ U/SIM(\{a\})\setminus\{U\}\right), & \text{if } 2\le\left|U/SIM(\{a\})\right| \text{ and } U\in U/SIM(\{a\}),\\ \left(U,\ U/SIM(\{a\})\right), & \text{otherwise}.\end{cases}$$
Moreover, the family of hypergraphs derived from AT is denoted as $H_{AT}=\{H_a\mid a\in AT\}$.
The similarity relation is an important index to describe the similarity between objects in generalized rough sets. Below, a type of relation matrix of an IIS is established based on the similarity relation.
Definition 12.
Consider an IIS $(U,AT,V,f)$ with $U=\{x_1,x_2,\ldots,x_n\}$ and $AT=\{a_1,a_2,\ldots,a_m\}$. For any $A\subseteq AT$, the relationship matrix between vertices $MR_{A}=(c_{ij})_{n\times n}$ is defined as follows:
$$c_{ij}=\begin{cases}1,& (x_i,x_j)\in SIM(A),\\ 0,& \text{otherwise}.\end{cases}$$
$MR_{A}=(c_{ij})_{n\times n}$ is called the relation matrix induced by A.
For an IIS $S=(U,AT,V,f)$, the relationship between $MR_{a}$ and the hypergraph $H_{a}$ derived from $a\in AT$ is examined in the following proposition. In fact, $MR_{a}$ is exactly the type-two characteristic matrix induced by $H_{a}$ introduced in Section 3.
Proposition 4.
Suppose $S=(U,AT,V,f)$ is an IIS with $AT=\{a_1,a_2,\ldots,a_m\}$. Then $MR_{a_i}=SM_{H_{a_i}}=OM_{H_{a_i}}\odot(OM_{H_{a_i}})^{T}$ ($1\le i\le m$).
Proof. 
Let $U=\{x_1,x_2,\ldots,x_n\}$, $MR_{a_i}=(c_{jk})_{n\times n}$ and $SM_{H_{a_i}}=(b_{jk})_{n\times n}$, where $H_{a_i}=(U,\xi_i)$ for any $a_i\in AT$. According to Definitions 11 and 12, $c_{jk}=1$ ⟺ $(x_j,x_k)\in SIM(\{a_i\})$ ⟺ there exists $E\in\xi_i$ such that $\{x_j,x_k\}\subseteq E$. According to Definition 10, this holds ⟺ $b_{jk}=1$. Hence $MR_{a_i}=SM_{H_{a_i}}$. According to Theorem 1, $SM_{H_{a_i}}=OM_{H_{a_i}}\odot(OM_{H_{a_i}})^{T}$. To sum up, $MR_{a_i}=SM_{H_{a_i}}=OM_{H_{a_i}}\odot(OM_{H_{a_i}})^{T}$.    □
For an IIS $S=(U,AT,V,f)$, combining Proposition 4, the following theorem presents the relationship between $MR_{AT}$ and the family of hypergraphs derived from AT through matrix operations.
Theorem 4.
Suppose $S=(U,AT,V,f)$ is an IIS with $U=\{x_1,x_2,\ldots,x_n\}$ and $AT=\{a_1,a_2,\ldots,a_m\}$. Then $MR_{AT}=\bigwedge_{k=1}^{m}\left(OM_{H_{a_k}}\odot(OM_{H_{a_k}})^{T}\right)$.
Proof. 
According to Proposition 4, we only need to prove $MR_{AT}=\bigwedge_{k=1}^{m}SM_{H_{a_k}}$. Denote $MR_{AT}=(c_{ij})_{n\times n}$ and $SM_{H_{a_k}}=(a_{ij}^{(k)})_{n\times n}$ ($1\le k\le m$). We have $c_{ij}=1$ ⟺ $(x_i,x_j)\in SIM(AT)$ ⟺ $(x_i,x_j)\in\bigcap_{k=1}^{m}SIM(\{a_k\})$ ⟺ for every $a_k\in AT$, $(x_i,x_j)\in SIM(\{a_k\})$ ⟺ for every $a_k\in AT$, $a_{ij}^{(k)}=1$ ⟺ $\bigwedge_{k=1}^{m}a_{ij}^{(k)}=1$. Thus, $MR_{AT}=\bigwedge_{k=1}^{m}SM_{H_{a_k}}=\bigwedge_{k=1}^{m}\left(OM_{H_{a_k}}\odot(OM_{H_{a_k}})^{T}\right)$.    □
The following example illustrates the calculation method of Theorem 4.
Example 6.
For the IIS $K_1$ depicted in Table 1, $U=\{x_1,x_2,\ldots,x_6\}$ and $AT=\{I,S,G\}$. We have the following:
  • $H_{I}=(U,\xi_{I})$, where $\xi_{I}=\{\{x_1,x_4,x_5,x_6\},\{x_2,x_3,x_4\}\}$.
  • $H_{S}=(U,\xi_{S})$, where $\xi_{S}=\{\{x_1,x_2,x_3,x_4,x_5\},\{x_2,x_3,x_6\}\}$.
  • $H_{G}=(U,\xi_{G})$, where $\xi_{G}=\{\{x_1,x_2,x_3\},\{x_1,x_4,x_5\},\{x_1,x_6\}\}$.
Then
$$OM_{H_I}=\begin{pmatrix}1&0\\0&1\\0&1\\1&1\\1&0\\1&0\end{pmatrix},\qquad SM_{H_I}=OM_{H_I}\odot(OM_{H_I})^{T}=\begin{pmatrix}1&0&0&1&1&1\\0&1&1&1&0&0\\0&1&1&1&0&0\\1&1&1&1&1&1\\1&0&0&1&1&1\\1&0&0&1&1&1\end{pmatrix},$$
$$OM_{H_S}=\begin{pmatrix}1&0\\1&1\\1&1\\1&0\\1&0\\0&1\end{pmatrix},\qquad SM_{H_S}=OM_{H_S}\odot(OM_{H_S})^{T}=\begin{pmatrix}1&1&1&1&1&0\\1&1&1&1&1&1\\1&1&1&1&1&1\\1&1&1&1&1&0\\1&1&1&1&1&0\\0&1&1&0&0&1\end{pmatrix},$$
$$OM_{H_G}=\begin{pmatrix}1&1&1\\1&0&0\\1&0&0\\0&1&0\\0&1&0\\0&0&1\end{pmatrix},\qquad SM_{H_G}=OM_{H_G}\odot(OM_{H_G})^{T}=\begin{pmatrix}1&1&1&1&1&1\\1&1&1&0&0&0\\1&1&1&0&0&0\\1&0&0&1&1&0\\1&0&0&1&1&0\\1&0&0&0&0&1\end{pmatrix}.$$
(Rows of each $OM$ are indexed by $x_1,\ldots,x_6$ and columns by the hyperedges of the corresponding $\xi$.)
We can conclude that
$$MR_{AT}=SM_{H_I}\wedge SM_{H_S}\wedge SM_{H_G}=\begin{pmatrix}1&0&0&1&1&0\\0&1&1&0&0&0\\0&1&1&0&0&0\\1&0&0&1&1&0\\1&0&0&1&1&0\\0&0&0&0&0&1\end{pmatrix}.$$
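Example 6 can be reproduced from Table 1 in a few lines. The sketch below (our code) computes each $SM_{H_a}$ directly as the single-attribute tolerance matrix $MR_a$ (Proposition 4) and combines them by the entrywise conjunction of Theorem 4.

```python
import numpy as np

M = None  # the missing value *
table = {1: ("Low", "Medium", M),     2: ("Medium", M, "Medium"),
         3: ("Medium", M, "Medium"),  4: (M, "Medium", "Low"),
         5: ("Low", "Medium", "Low"), 6: ("Low", "High", "High")}  # Table 1: (I, S, G)

def similar(u, v):
    return u == v or u is M or v is M

def sm(attr):  # SM_{H_a} = MR_a for a single attribute
    return np.array([[int(similar(table[i][attr], table[j][attr]))
                      for j in table] for i in table])

MR_AT = sm(0) & sm(1) & sm(2)   # Theorem 4: entrywise AND over all attributes
print(MR_AT[0])                 # [1 0 0 1 1 0], i.e. S_AT(x1) = {x1, x4, x5}
```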
In fact, for an IIS $S=(U,AT,V,f)$, Theorem 4 tells us that $MR_{AT}$ decomposes, through the entrywise operation ∧, into the matrices induced by the individual attributes. In the following, by the matrix operation ⨀, the inclusion relation between object subsets can be expressed through characteristic vectors.
Proposition 5.
Suppose $S=(U,AT,V,f)$ is an IIS and $X,Y\subseteq U$. Then $X\subseteq Y$ if and only if $\chi_X^{T}\odot(\neg\chi_Y)=0$.
Proof. 
Denote the i-th elements of $\chi_X$ and $\chi_Y$ by $\chi_X(i)$ and $\chi_Y(i)$ ($1\le i\le|U|$). We have $X\subseteq Y$ ⟺ for any $1\le i\le|U|$, if $\chi_X(i)=1$ then $\chi_Y(i)=1$, i.e., $\neg\chi_Y(i)=0$ ⟺ $\bigvee_{i=1}^{|U|}\left(\chi_X(i)\wedge\neg\chi_Y(i)\right)=0$, i.e., $\chi_X^{T}\odot(\neg\chi_Y)=0$.    □
From the viewpoint of matrix operation ⨁, the inclusion relation between object sets can also be presented.
Proposition 6.
Suppose $S=(U,AT,V,f)$ is an IIS and $X,Y\subseteq U$. Then $X\subseteq Y$ if and only if $(\neg\chi_X)^{T}\oplus\chi_Y=1$.
Proof. 
Denote the i-th elements of $\chi_X$ and $\chi_Y$ by $\chi_X(i)$ and $\chi_Y(i)$ ($1\le i\le|U|$). We have $X\subseteq Y$ ⟺ for any $1\le i\le|U|$, if $\chi_Y(i)=0$ then $\chi_X(i)=0$, i.e., $\neg\chi_X(i)=1$ ⟺ $\bigwedge_{i=1}^{|U|}\left(\neg\chi_X(i)\vee\chi_Y(i)\right)=1$, i.e., $(\neg\chi_X)^{T}\oplus\chi_Y=1$.    □
Combining Definition 4 and Theorem 4 with Propositions 5 and 6, for any $X\subseteq U$, the matrix representation of $\underline{A}(X)$ is obtained from the relation matrix induced by the attribute set AT.
Proposition 7.
Suppose S = ( U , A T , V , f ) is an IIS and X U . Then χ A ̲ ( X ) = M R A T ( χ X ) .
Proof. 
Denote the i-th element of $\chi_{\underline{A}(X)}$ by $\chi_{\underline{A}(X)}(i)$, and let $\alpha_i$ denote the i-th row of $MR_{AT}$, so that $\alpha_i=\chi_{S_{AT}(x_i)}^{T}$. According to Proposition 5, $\chi_{\underline{A}(X)}(i)=1$ ⟺ $S_{AT}(x_i)\subseteq X$ ⟺ $\alpha_i\odot(\neg\chi_X)=0$ ($1\le i\le|U|$) ⟺ the i-th element of $MR_{AT}\odot(\neg\chi_X)$ is 0. Hence $\chi_{\underline{A}(X)}=\neg\left(MR_{AT}\odot(\neg\chi_X)\right)$.    □
Similarly to Proposition 7, the matrix representation of A ¯ ( X ) can be obtained as follows.
Proposition 8.
Suppose S = ( U , A T , V , f ) is an IIS and X U . Then χ A ¯ ( X ) = M R A T ( χ X ) .
Proof. 
Denote the i-th element of $\chi_{\overline{A}(X)}$ by $\chi_{\overline{A}(X)}(i)$, and let $\alpha_i=\chi_{S_{AT}(x_i)}^{T}$ be the i-th row of $MR_{AT}$. We have $\chi_{\overline{A}(X)}(i)=1$ ⟺ $S_{AT}(x_i)\cap X\neq\emptyset$ ⟺ $\bigvee_{k=1}^{|U|}\left(\chi_{S_{AT}(x_i)}(k)\wedge\chi_X(k)\right)=1$ ⟺ the i-th element of $MR_{AT}\odot\chi_X$ is 1. Therefore, $\chi_{\overline{A}(X)}=MR_{AT}\odot\chi_X$.    □
According to Propositions 7 and 8, we can obtain an algorithm based on the constructed matrices to calculate the upper and lower approximations of any subset of objects.
The construction of the characteristic matrices $OM_{H_a}$ ($a\in AT$) and of the characteristic vector $\chi_X$ in Step 1 can be performed in $O(|U|^{2}|AT|)$ time. Step 2 calculates the type-two characteristic matrices $SM_{H_a}$ and the relation matrix $MR_{AT}$ induced by AT; its time complexity is $O(|U|^{2}|AT|)$. Since the relation matrix has size $|U|\times|U|$ and the characteristic vector $\chi_X$ has length $|U|$, Step 3 consists of matrix-vector ⨀-products and costs at most $O(|U|^{2})$. Therefore, the worst-case time complexity of Algorithm 1 is $O(|U|^{2}|AT|)$.
Algorithm 1 A matrix-based algorithm to calculate the approximations of a subset of objects
Input: An IIS $S=(U,AT,V,f)$ and a subset of objects X.
Output: The lower and upper approximations of X.
  1: Compute the characteristic matrix $OM_{H_a}$ for each $a\in AT$ and the characteristic vector $\chi_X$ of X;
  2: Compute $SM_{H_a}$ for each $a\in AT$ and $MR_{AT}$ according to Proposition 4 and Theorem 4;
  3: Compute $MR_{AT}\odot\chi_X$ and $\neg\left(MR_{AT}\odot(\neg\chi_X)\right)$ according to Propositions 8 and 7;
  4: Return $\overline{A}(X)$ and $\underline{A}(X)$.
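A direct sketch of Algorithm 1 (our code), applied to X = {x1, x2, x4} of Tables 2 and 3, with MR_AT taken from Example 6:

```python
import numpy as np

def odot(A, B):
    return (A[:, :, None] & B[None, :, :]).any(axis=1).astype(int)

def approximations(MR, chi_X):
    """chi_X is a 0/1 column vector; returns (chi_lower, chi_upper)."""
    upper = odot(MR, chi_X)            # Proposition 8
    lower = 1 - odot(MR, 1 - chi_X)    # Proposition 7: NOT(MR odot NOT chi_X)
    return lower, upper

MR_AT = np.array([[1, 0, 0, 1, 1, 0], [0, 1, 1, 0, 0, 0], [0, 1, 1, 0, 0, 0],
                  [1, 0, 0, 1, 1, 0], [1, 0, 0, 1, 1, 0], [0, 0, 0, 0, 0, 1]])
chi = np.array([[1], [1], [0], [1], [0], [0]])   # X = {x1, x2, x4}
low, up = approximations(MR_AT, chi)
print(low.ravel())   # [0 0 0 0 0 0] -> the lower approximation is empty
print(up.ravel())    # [1 1 1 1 1 0] -> the upper approximation is {x1,...,x5}
```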
Example 7.
Continuing Example 6, for several subsets $X\subseteq U$, the matrix representations of $\underline{A}(X)$ and $\overline{A}(X)$ are presented in Table 2 and Table 3, respectively.
The concept of a consistent set is important for attribute reduction in information systems. In the following, a consistent set of AT is characterized from the viewpoint of matrices. Moreover, a hypergraph-based method for attribute reduction in an IIS is obtained via the matrix.
Theorem 5.
Suppose S = ( U , A T , V , f ) is an IIS and A A T . Then the following hold:
(1) 
A is a consistent set of AT if and only if $MR_{AT}=\bigwedge_{a\in A}SM_{H_a}$.
(2) 
A is a reduct of AT if and only if $MR_{AT}=\bigwedge_{a\in A}SM_{H_a}$ and $MR_{AT}\neq\bigwedge_{a\in B}SM_{H_a}$ for any $B\subsetneq A$.
Proof. 
(1) According to Definitions 3 and 12, A is a consistent set of AT ⟺ $SIM(AT)=SIM(A)$ ⟺ $MR_{AT}=MR_{A}$. According to Theorem 4, $MR_{A}=\bigwedge_{a\in A}SM_{H_a}$, and hence $MR_{AT}=MR_{A}$ ⟺ $MR_{AT}=\bigwedge_{a\in A}SM_{H_a}$.
(2) According to Definition 3 and combining with (1), it is obvious.    □
Based on Theorem 5, a new discernibility matrix is defined to calculate the reducts in IISs.
Definition 13.
Suppose $S=(U,AT,V,f)$ is an IIS with $U=\{x_1,x_2,\ldots,x_n\}$ and $AT=\{a_1,a_2,\ldots,a_m\}$, and let $M(AT)=\{SM_{H_{a_1}}=(a_{ij}^{(1)})_{n\times n},\ SM_{H_{a_2}}=(a_{ij}^{(2)})_{n\times n},\ldots,\ SM_{H_{a_m}}=(a_{ij}^{(m)})_{n\times n}\}$ be the family of type-two characteristic matrices induced by AT. $M(S)=(c_{ij})_{n\times n}$ is called the discernibility matrix of S, where $c_{ij}=\{a_k\mid a_{ij}^{(k)}=0\}$ for any $1\le i,j\le n$, $1\le k\le m$.
By the following theorem, we establish a new judgment method for attribute reduction in IISs based on the constructed discernibility matrix in Definition 13.
Theorem 6.
Suppose $S=(U,AT,V,f)$ is an IIS and $M(S)=(c_{ij})_{n\times n}$ is the discernibility matrix of S. Then for any $A\subseteq AT$, the following hold:
(1) 
A is a consistent set of AT if and only if $A\cap c_{ij}\neq\emptyset$ for any $c_{ij}\neq\emptyset$.
(2) 
A is a reduct of AT if and only if A is a minimal set satisfying $A\cap c_{ij}\neq\emptyset$ for any $c_{ij}\neq\emptyset$.
Proof. 
Denote $AT=\{a_1,a_2,\ldots,a_m\}$ and $SM_{H_{a_k}}=(a_{ij}^{(k)})_{n\times n}$ ($1\le k\le m$).
(1) Necessity. Suppose A is a consistent set of AT and $c_{ij}\neq\emptyset$. If $A\cap c_{ij}=\emptyset$, then $a_{ij}^{(k)}=1$ for every $a_k\in A$. Since $c_{ij}\neq\emptyset$, there exists $a_k\in AT$ such that $a_{ij}^{(k)}=0$, which implies that $MR_{AT}\neq\bigwedge_{a\in A}SM_{H_a}$. According to Theorem 5, this contradicts the assumption that A is a consistent set of AT.
Sufficiency. Suppose $A\cap c_{ij}\neq\emptyset$ for any $c_{ij}\neq\emptyset$. Let $MR_{AT}=(a_{ij})_{n\times n}$ and $MR_{A}=\bigwedge_{a\in A}SM_{H_a}=(b_{ij})_{n\times n}$. If $MR_{AT}\neq MR_{A}$, then there is at least one pair $(i,j)$ with $a_{ij}\neq b_{ij}$. There are two possible cases:
Case 1.
$a_{ij}=0$ and $b_{ij}=1$. According to Theorem 4, there exists at least one $a_k\in AT$ such that $a_{ij}^{(k)}=0$, which implies $c_{ij}\neq\emptyset$. Since $A\cap c_{ij}\neq\emptyset$, there is some $a_k\in A$ with $a_{ij}^{(k)}=0$; hence $b_{ij}=0$, which contradicts $b_{ij}=1$.
Case 2.
$a_{ij}=1$ and $b_{ij}=0$. This case cannot occur: if $b_{ij}=0$, then some $a_k\in A\subseteq AT$ satisfies $a_{ij}^{(k)}=0$, so $a_{ij}=0$ by Theorem 4, which contradicts $a_{ij}=1$.
To sum up, $MR_{AT}=MR_{A}$, which implies that A is a consistent set of AT.
(2) According to Definition 3 and combining with (1), it is obvious.    □
According to Definition 13 and Theorem 6, a hypergraph-based algorithm can be obtained to calculate the reducts of an IIS.
Below we give an example to illustrate the process of calculating attribute reduction in an IIS by Algorithm 2.
Algorithm 2 A hypergraph-based method for the attribute reduction in an IIS via matrices
Input: An IIS $S=(U,AT,V,f)$.
Output: The attribute reducts of the IIS.
  1: Compute the characteristic matrix $SM_{H_a}$ for each $a\in AT$ according to Algorithm 1;
  2: Compute the discernibility matrix $M(S)$ of S according to Definition 13 and Theorem 5;
  3: Compute the consistent sets and the reducts of S according to Theorem 6;
  4: Return all the reducts of S.
Example 8.
For the IIS $K_2$ depicted in Table 4, $U=\{x_1,x_2,\ldots,x_6\}$ and $AT=\{P,L,S,X\}$. We have the following:
  • $H_{P}=(U,\xi_{P})$, where $\xi_{P}=\{\{x_1,x_3,x_4\},\{x_2,x_5,x_6\}\}$.
  • $H_{L}=(U,\xi_{L})$, where $\xi_{L}=\{\{x_1,x_2,x_3\},\{x_2,x_3,x_4,x_5,x_6\}\}$.
  • $H_{S}=(U,\xi_{S})$, where $\xi_{S}=\{\{x_1,x_2,x_5,x_6\},\{x_3,x_4\}\}$.
  • $H_{X}=(U,\xi_{X})$, where $\xi_{X}=\{\{x_1,x_3,x_4,x_5\},\{x_2,x_3,x_6\}\}$.
Then it can be calculated that
$$SM_{H_P}=\begin{pmatrix}1&0&1&1&0&0\\0&1&0&0&1&1\\1&0&1&1&0&0\\1&0&1&1&0&0\\0&1&0&0&1&1\\0&1&0&0&1&1\end{pmatrix},\qquad SM_{H_L}=\begin{pmatrix}1&1&1&0&0&0\\1&1&1&1&1&1\\1&1&1&1&1&1\\0&1&1&1&1&1\\0&1&1&1&1&1\\0&1&1&1&1&1\end{pmatrix},$$
$$SM_{H_S}=\begin{pmatrix}1&1&0&0&1&1\\1&1&0&0&1&1\\0&0&1&1&0&0\\0&0&1&1&0&0\\1&1&0&0&1&1\\1&1&0&0&1&1\end{pmatrix},\qquad SM_{H_X}=\begin{pmatrix}1&0&1&1&1&0\\0&1&1&0&0&1\\1&1&1&1&1&1\\1&0&1&1&1&0\\1&0&1&1&1&0\\0&1&1&0&0&1\end{pmatrix}.$$
According to Definition 13, the discernibility matrix of K 2 is presented as
$$M(K_2)=\begin{pmatrix}\emptyset&\{P,X\}&\{S\}&\{L,S\}&\{P,L\}&\{P,L,X\}\\ \{P,X\}&\emptyset&\{P,S\}&\{P,S,X\}&\{X\}&\emptyset\\ \{S\}&\{P,S\}&\emptyset&\emptyset&\{P,S\}&\{P,S\}\\ \{L,S\}&\{P,S,X\}&\emptyset&\emptyset&\{P,S\}&\{P,S,X\}\\ \{P,L\}&\{X\}&\{P,S\}&\{P,S\}&\emptyset&\{X\}\\ \{P,L,X\}&\emptyset&\{P,S\}&\{P,S,X\}&\{X\}&\emptyset\end{pmatrix}.$$
The singleton entries $\{S\}$ and $\{X\}$ force S and X into every consistent set, and the entry $\{P,L\}$ then requires P or L; all remaining nonempty entries are already met. Hence $\{P,S,X\}$ and $\{S,L,X\}$ are the two reducts of $K_2$.
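The whole of Algorithm 2 fits in a short script. The sketch below (our code, with a brute-force minimality search) rebuilds the matrices of Example 8 from Table 4, extracts the discernibility entries of Definition 13, and applies the test of Theorem 6.

```python
from itertools import combinations
import numpy as np

M = None
table = {1: ("High", "Medium", "Full", "Low"),  2: ("Low", M, "Full", "High"),
         3: ("High", M, "Compact", M),          4: ("High", "High", "Compact", "Low"),
         5: ("Low", "High", "Full", "Low"),     6: ("Low", "High", "Full", "High")}
attrs = ["P", "L", "S", "X"]

def similar(u, v):
    return u == v or u is M or v is M

SM = {a: np.array([[int(similar(table[i][k], table[j][k]))
                    for j in table] for i in table])
      for k, a in enumerate(attrs)}

# Nonempty discernibility entries c_ij = {a | SM_{H_a}[i, j] = 0} (Definition 13).
cells = [{a for a in attrs if SM[a][i, j] == 0}
         for i in range(6) for j in range(6)]
cells = [c for c in cells if c]

def is_consistent(A):  # Theorem 6(1): A meets every nonempty entry
    return all(A & c for c in cells)

reducts = [set(A) for r in range(1, 5) for A in combinations(attrs, r)
           if is_consistent(set(A))
           and not any(is_consistent(set(A) - {a}) for a in A)]
print(reducts)   # [{'P', 'S', 'X'}, {'L', 'S', 'X'}] (set ordering may vary)
```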

4.2. Hypergraph-Based Method for Attribute Reduction of Consistent and Inconsistent IDSs

In the previous subsection, we gave a hypergraph-based attribute reduction method for an IIS. In this part, we discuss the problem of attribute reduction in consistent and inconsistent IDSs from the perspective of matrix representations, and a hypergraph-based method for attribute reduction in consistent and inconsistent IDSs is given through matrices.
Definition 14
([6]). Suppose $S=(U,AT\cup\{d\},V,f)$ is a consistent IDS and $A\subseteq AT$. If $SIM(A)\subseteq IND(d)$, i.e., $S_A(x)\subseteq[x]_d$ for any $x\in U$, then A is called a relative consistent set of S; if, moreover, no proper subset of A is a relative consistent set, then A is called a relative reduct of S.
Suppose $S=(U,AT\cup\{d\},V,f)$ is a consistent IDS with $U/IND(d)=\{D_1,D_2,\ldots,D_r\}$. Denote $H(d)=(\chi_{D_1},\chi_{D_2},\ldots,\chi_{D_r})$. Based on Propositions 7 and 8, the matrix characterization of the relative consistent sets of S can be obtained as follows.
Theorem 7.
Suppose $S=(U,AT\cup\{d\},V,f)$ is a consistent IDS and $A\subseteq AT$. Then the following hold:
(1)
A is a relative consistent set of S if and only if $\neg\left(MR_{A}\odot(\neg H(d))\right)=H(d)$.
(2)
A is a relative consistent set of S if and only if $MR_{A}\odot H(d)=H(d)$.
Proof. 
(1) Necessity. Suppose A is a relative consistent set of S. According to Definition 14 and Proposition 1, $\underline{A}(D)\subseteq D$ for any $D\in U/IND(d)$. Conversely, for any $x\in D$ we have $[x]_d=D$, and since A is a relative consistent set, $S_A(x)\subseteq[x]_d=D$, which implies $x\in\underline{A}(D)$; hence $D\subseteq\underline{A}(D)$. Therefore $\underline{A}(D)=D$ and $\chi_{\underline{A}(D)}=\chi_D$ for any $D\in U/IND(d)$. According to Proposition 7, $\neg\left(MR_{A}\odot(\neg\chi_D)\right)=\chi_D$ for every D, which implies $\neg\left(MR_{A}\odot(\neg H(d))\right)=H(d)$.
Sufficiency. For any $D\in U/IND(d)$, $\neg\left(MR_{A}\odot(\neg H(d))\right)=H(d)$ implies $\chi_{\underline{A}(D)}=\chi_D$, i.e., $\underline{A}(D)=D$, which implies that A is a relative consistent set of S.
(2) Necessity. Suppose A is a relative consistent set of S. According to Definition 14 and Proposition 1, $\overline{A}(D)=\bigcup_{x\in D}S_A(x)=D$ for any $D\in U/IND(d)$, namely $\chi_{\overline{A}(D)}=\chi_D$. According to Proposition 8, $MR_{A}\odot\chi_D=\chi_D$ for every D, which implies $MR_{A}\odot H(d)=H(d)$.
Sufficiency. For any $D\in U/IND(d)$, $MR_{A}\odot H(d)=H(d)$ implies $MR_{A}\odot\chi_D=\chi_D$, i.e., $\chi_{\overline{A}(D)}=\chi_D$, so $\overline{A}(D)=D$, which implies that A is a relative consistent set of S. □
Combining Definition 14 and Theorem 7, we obtain the matrix-based method for the attribute reductions in consistent IDSs.
Theorem 8.
Suppose $S=(U,AT\cup\{d\},V,f)$ is a consistent IDS and $A\subseteq AT$. Then the following hold:
(1) 
A is a relative reduct of S if and only if A is a minimal set satisfying $\neg\left(MR_{A}\odot(\neg H(d))\right)=H(d)$.
(2) 
A is a relative reduct of S if and only if A is a minimal set satisfying $MR_{A}\odot H(d)=H(d)$.
Proof. 
By Theorem 7, A is a relative consistent set of S if and only if $\neg\left(MR_{A}\odot(\neg H(d))\right)=H(d)$, and equivalently if and only if $MR_{A}\odot H(d)=H(d)$. According to Definition 14, A is a relative reduct if and only if, in addition, no $B\subsetneq A$ is a relative consistent set of S. In other words, A is a minimal set satisfying $MR_{A}\odot H(d)=H(d)$, or equivalently $\neg\left(MR_{A}\odot(\neg H(d))\right)=H(d)$. □
Theorem 8 provides a hypergraph-based method for attribute reduction in consistent IDSs via matrices. In the following, attribute reduction in an inconsistent IDS is treated in a similar way.
Definition 15.
Suppose $S=(U,AT\cup\{d\},V,f)$ is an inconsistent IDS and $A\subseteq AT$. If $\Upsilon_A(x)=\Upsilon_{AT}(x)$ for any $x\in U$, then A is called a relative consistent set of S; if, moreover, no proper subset of A is a relative consistent set, then A is called a relative reduct of S.
Lemma 1
([6]). Suppose $S=(U,AT\cup\{d\})$ is an inconsistent IDS and $A\subseteq AT$. Then A is a relative consistent set of S if and only if $\overline{A}(D)=\overline{AT}(D)$ for any $D\in U/IND(d)$.
Based on Lemma 1 and Proposition 8, the following judgment method for the relative consistent sets of an inconsistent IDS is presented from the viewpoint of matrices.
Theorem 9.
Suppose $S=(U,AT\cup\{d\},V,f)$ is an inconsistent IDS and $A\subseteq AT$. Then A is a relative consistent set of S if and only if $MR_{A}\odot\chi_D=MR_{AT}\odot\chi_D$ for any $D\in U/IND(d)$.
Proof. 
According to Lemma 1 and Proposition 8, we have $\chi_{\overline{A}(D)}=MR_{A}\odot\chi_D$ and $\chi_{\overline{AT}(D)}=MR_{AT}\odot\chi_D$ for any $D\in U/IND(d)$. Therefore, A is a relative consistent set of S if and only if $MR_{A}\odot\chi_D=MR_{AT}\odot\chi_D$ for any $D\in U/IND(d)$. □
Furthermore, a hypergraph-based method for attribute reduction in inconsistent IDSs is obtained via matrices as follows.
Theorem 10.
Suppose $S=(U,AT\cup\{d\},V,f)$ is an inconsistent IDS and $A\subseteq AT$. Then A is a relative reduct of S if and only if A is a minimal set satisfying $MR_{A}\odot H(d)=MR_{AT}\odot H(d)$.
Proof. 
According to Definition 15 and Theorem 9, the following statements are equivalent:
  • A is a relative reduct of S;
  • A is a relative consistent set of S, and no $B\subsetneq A$ is a relative consistent set of S;
  • $MR_{A}\odot\chi_D=MR_{AT}\odot\chi_D$ for any $D\in U/IND(d)$, and for any $B\subsetneq A$ there exists $D\in U/IND(d)$ with $MR_{B}\odot\chi_D\neq MR_{AT}\odot\chi_D$;
  • $MR_{A}\odot H(d)=MR_{AT}\odot H(d)$, and $MR_{B}\odot H(d)\neq MR_{AT}\odot H(d)$ for any $B\subsetneq A$;
  • A is a minimal set satisfying $MR_{A}\odot H(d)=MR_{AT}\odot H(d)$. □
In the following, we give an example to illustrate the process of calculating attribute reduction in inconsistent IDSs.
Example 9.
For the inconsistent IDS $K_4=(U,AT\cup\{d\},V,f)$ depicted in Table 5, where $U=\{x_1,x_2,\ldots,x_6\}$, $AT=\{P,L,S,X\}$ and the decision attribute is $d=I$, we have the following:
  • $H_{P}=(U,\xi_{P})$, where $\xi_{P}=\{\{x_1,x_3,x_4,x_5,x_6\},\{x_2,x_4\}\}$.
  • $H_{L}=(U,\xi_{L})$, where $\xi_{L}=\{\{x_1,x_6\},\{x_2,x_3,x_4,x_5\}\}$.
  • $H_{S}=(U,\xi_{S})$, where $\xi_{S}=\{\{x_1,x_6\},\{x_2,x_3,x_4,x_5\}\}$.
  • $H_{X}=(U,\xi_{X})$, where $\xi_{X}=\{\{x_1,x_3,x_5,x_6\},\{x_2,x_4\}\}$.
  • $H_{d}=(U,\xi_{d})$, where $\xi_{d}=\{\{x_1,x_5\},\{x_2,x_3,x_4\},\{x_6\}\}$.
According to Theorem 10, we can calculate that
Since $SM_{H_S}=SM_{H_L}$, we have $MR_{\{X,S\}}=MR_{\{X,L\}}$, and
$$MR_{\{X,L\}}\odot H(d)=MR_{\{X,S\}}\odot H(d)=MR_{AT}\odot H(d)=\begin{pmatrix}1&0&0&0&0&1\\0&1&0&1&0&0\\0&0&1&0&1&0\\0&1&0&1&0&0\\0&0&1&0&1&0\\1&0&0&0&0&1\end{pmatrix}\odot\begin{pmatrix}1&0&0&0&1&0\\0&1&1&1&0&0\\0&1&1&1&0&0\\0&1&1&1&0&0\\1&0&0&0&1&0\\0&0&0&0&0&1\end{pmatrix}=\begin{pmatrix}1&0&0&0&1&1\\0&1&1&1&0&0\\1&1&1&1&1&0\\0&1&1&1&0&0\\1&1&1&1&1&0\\1&0&0&0&1&1\end{pmatrix},$$
where $H(d)$ is written object-wise as the $n\times n$ matrix whose j-th column is $\chi_{[x_j]_d}$; its distinct columns are $\chi_{D_1}$, $\chi_{D_2}$ and $\chi_{D_3}$.
Moreover,
  • $MR_{\{X\}}\odot H(d)\neq MR_{AT}\odot H(d)$.
  • $MR_{\{L\}}\odot H(d)\neq MR_{AT}\odot H(d)$, and likewise for $MR_{\{S\}}$, since $SM_{H_S}=SM_{H_L}$.
  • $MR_{\{P\}}\odot H(d)\neq MR_{AT}\odot H(d)$.
Through verification, we can conclude that $\{X,L\}$ and $\{X,S\}$ are two relative reducts of $K_4$.
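The verification can be replayed mechanically. The sketch below (our code) encodes Table 5, forms the object-wise H(d) used above, and runs the Theorem 10 test for {X, L}, {X, S} and the singleton subsets.

```python
import numpy as np

M = None
cond = {1: ("High", "High", "Full", "Low"),      2: ("Low", "Medium", "Compact", "High"),
        3: ("High", "Medium", "Compact", "Low"), 4: (M, "Medium", "Compact", "High"),
        5: ("High", "Medium", "Compact", "Low"), 6: ("High", "High", "Full", "Low")}
dec = {1: "Good", 2: "Poor", 3: "Poor", 4: "Poor", 5: "Good", 6: "Excellent"}
attrs = ["P", "L", "S", "X"]

def odot(A, B):
    return (A[:, :, None] & B[None, :, :]).any(axis=1).astype(int)

def MR(A):  # relation matrix of an attribute subset (Definition 12)
    ks = [attrs.index(a) for a in A]
    return np.array([[int(all(cond[i][k] == cond[j][k] or M in (cond[i][k], cond[j][k])
                              for k in ks)) for j in cond] for i in cond])

H_d = np.array([[int(dec[i] == dec[j]) for j in dec] for i in dec])  # j-th column: chi_{[x_j]_d}
target = odot(MR(attrs), H_d)

for A in (["X", "L"], ["X", "S"], ["X"], ["L"], ["P"]):
    print(A, (odot(MR(A), H_d) == target).all())
# ['X', 'L'] True, ['X', 'S'] True, ['X'] False, ['L'] False, ['P'] False
# (the test for {S} coincides with that for {L}, since SM_{H_S} = SM_{H_L})
```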

5. Conclusions and Further Study

In this paper, two types of characteristic matrices of a hypergraph were presented, and the characteristic matrix decomposition relationship between them was examined. Based on the constructed characteristic matrices of hypergraphs, a hypergraph-based method for attribute reduction in an IIS was obtained, and a novel discernibility matrix was proposed to determine the attribute reducts of an IIS. Furthermore, from the perspective of matrices, judgment methods for identifying the reducts of consistent and inconsistent IDSs were given. It is worth noting that characteristic matrix factorization is a special case of matrix factorization, which plays an important role in machine learning, deep learning and so on. Our future work is as follows:
(1) It is necessary to design an efficient implementation of the method proposed in this article and promote its application. (2) For IISs and IDSs, further research is needed on attribute reduction under different requirements, and the proposed methods should be improved where necessary. (3) It is necessary to extend the method proposed in this paper to broader and more complex information systems, such as fuzzy decision systems and dynamic information systems.

Author Contributions

Conceptualization, L.S. and C.J.; methodology, L.S. and C.J.; software, L.S.; validation, L.S.; formal analysis, L.S.; investigation, L.S.; resources, C.J.; data curation, L.S.; writing—original draft preparation, L.S.; writing—review and editing, L.S. and C.J.; visualization, C.J.; supervision, L.S.; project administration, L.S. and C.J.; funding acquisition, L.S. and C.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Fujian Provincial Natural Science Foundation of China (2024J01157).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yao, Y.; Zhao, Y. Attribute reduction in decision-theoretic rough set models. Inf. Sci. 2008, 178, 3356–3373.
  2. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356.
  3. Wang, C.; He, Q.; Chen, D.; Hu, Q. A novel method for attribute reduction of covering decision systems. Inf. Sci. 2014, 254, 181–196.
  4. Chen, J.; Li, J. An application of rough sets to graph theory. Inf. Sci. 2012, 201, 114–127.
  5. Su, L.; Zhu, W. Dependence space of topology and its application to attribute reduction. Int. J. Mach. Learn. Cybern. 2018, 9, 691–698.
  6. Kryszkiewicz, M. Rough set approach to incomplete information systems. Inf. Sci. 1998, 112, 39–49.
  7. Stefanowski, J.; Tsoukiàs, A. Incomplete information tables and rough classification. Comput. Intell. 2001, 17, 545–566.
  8. Qian, Y.; Liang, J.; Pedrycz, W.; Dang, C. Positive approximation: An accelerator for attribute reduction in rough set theory. Artif. Intell. 2010, 174, 597–618.
  9. Wang, J.; Zhang, X. Intuitionistic Fuzzy Granular Matrix: Novel Calculation Approaches for Intuitionistic Fuzzy Covering-Based Rough Sets. Axioms 2024, 13, 411.
  10. Meng, Z.; Shi, Z. Extended rough set-based attribute reduction in inconsistent incomplete decision systems. Inf. Sci. 2012, 204, 44–69.
  11. Wu, S.; Wang, L.; Ge, S.; Xiong, Z.; Liu, J. Feature selection algorithm using neighborhood equivalence tolerance relation for incomplete decision systems. Appl. Soft Comput. 2024, 157, 111463.
  12. Kryszkiewicz, M. Rules in incomplete information systems. Inf. Sci. 1999, 113, 271–292.
  13. Leung, Y.; Li, D. Maximal consistent block technique for rule acquisition in incomplete information systems. Inf. Sci. 2003, 153, 85–106.
  14. Leung, Y.; Wu, W.; Zhang, W. Knowledge acquisition in incomplete information systems: A rough set approach. Eur. J. Oper. Res. 2006, 168, 164–180.
  15. Qian, Y.; Liang, J.; Wang, F. A new method for measuring the uncertainty in incomplete information systems. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2009, 17, 855–880.
  16. Wu, W. Attribute reduction based on evidence theory in incomplete decision systems. Inf. Sci. 2008, 178, 1355–1371.
  17. Yang, X.; Zhang, M.; Dou, H.; Yang, J. Neighborhood systems-based rough sets in incomplete information system. Knowl.-Based Syst. 2011, 6, 858–867.
  18. Shu, W.; Qian, W. A fast approach to attribute reduction from perspective of attribute measures in incomplete decision systems. Knowl.-Based Syst. 2014, 72, 60–71.
  19. Jiang, T.; Zhang, Y. Matrix-based local multigranulation reduction for covering decision information systems. Int. J. Approx. Reason. 2025, 181, 109415.
  20. Wang, S.; Zhu, Q.; Zhu, W.; Min, F. Graph and matrix approaches to rough sets through matroids. Inf. Sci. 2014, 288, 1–11.
  21. Chen, J.; Lin, Y.; Lin, G.; Li, J.; Ma, Z. The relationship between attribute reducts in rough sets and minimal vertex covers of graphs. Inf. Sci. 2015, 325, 87–97.
  22. Chen, J.; Lin, Y.; Lin, G.; Li, J.; Zhang, Y. Attribute reduction of covering decision systems by hypergraph model. Knowl.-Based Syst. 2017, 118, 93–104.
  23. Tan, A.; Li, J.J.; Lin, Y.J.; Lin, G.P. Matrix-based set approximations and reductions in covering decision information systems. Int. J. Approx. Reason. 2015, 59, 68–80.
  24. Yang, T.; Liang, J.; Pang, Y.; Xie, P.; Qian, Y.; Wang, R. An efficient feature selection algorithm based on the description vector and hypergraph. Inf. Sci. 2023, 1629, 746–759.
  25. Mao, H.; Wang, S.; Wang, L. Hypergraph-based attribute reduction of formal contexts in rough sets. Expert Syst. Appl. 2023, 234, 121062.
  26. Li, X.; Pang, Y. Deterministic column-based matrix decomposition. IEEE Trans. Knowl. Data Eng. 2010, 22, 145–149.
  27. Hu, C.; Zhang, L.; Huang, X.; Wang, H. Matrix-based approaches for updating three-way regions in incomplete information systems with the variation of attributes. Inf. Sci. 2023, 639, 119013.
  28. Chen, D.; Wang, C.; Hu, Q. A new approach to attribute reduction of consistent and inconsistent covering decision systems with covering rough sets. Inf. Sci. 2007, 177, 3500–3518.
  29. Chen, X.; Gong, Z.; Wei, G. On incomplete matrix information completion methods and opinion evolution: Matrix factorization towards adjacency preferences. Eng. Appl. Artif. Intell. 2024, 133, 108140.
  30. Luo, Y.; Chen, H.; Yin, T.; Horng, S.; Li, T. Dual hypergraphs with feature weighted and latent space learning for the diagnosis of Alzheimer's disease. Inf. Fusion 2024, 112, 102546.
  31. Demri, S.; Orlowska, E. Incomplete Information: Structure, Inference, Complexity; Springer: Heidelberg/Berlin, Germany, 2002.
  32. Bondy, J.; Murty, U. Graph Theory with Applications; Macmillan: London, UK, 1976.
Figure 1. A hypergraph.
Figure 2. A hypergraph and its dual.
Table 1. An IIS K1 about student subject scores.
Student   Chemistry (I)          Mathematics (S)          Geography (G)
x 1          Low     Medium      *
x 2          Medium      *      Medium
x 3          Medium      *      Medium
x 4          *      Medium      Low
x 5          Low      Medium      Low
x 6          Low      High      High
* stands for a missing value.
Table 2. Matrix representation of lower approximation sets.
X                χ_X                  ¬(MR_AT ⊙ (¬χ_X))     A̲(X)
{x6}             (0,0,0,0,0,1)^T      (0,0,0,0,0,1)^T       {x6}
{x2,x3}          (0,1,1,0,0,0)^T      (0,1,1,0,0,0)^T       {x2,x3}
{x1,x2,x4}       (1,1,0,1,0,0)^T      (0,0,0,0,0,0)^T       ∅
{x1,x4,x5}       (1,0,0,1,1,0)^T      (1,0,0,1,1,0)^T       {x1,x4,x5}
{x1,x2,x3,x6}    (1,1,1,0,0,1)^T      (0,1,1,0,0,1)^T       {x2,x3,x6}
Table 3. Matrix representation of upper approximation sets.
X                χ_X                  MR_AT ⊙ χ_X           Ā(X)
{x1,x4}          (1,0,0,1,0,0)^T      (1,0,0,1,1,0)^T       {x1,x4,x5}
{x2,x3,x6}       (0,1,1,0,0,1)^T      (0,1,1,0,0,1)^T       {x2,x3,x6}
{x1,x2,x4}       (1,1,0,1,0,0)^T      (1,1,1,1,1,0)^T       {x1,x2,x3,x4,x5}
{x1,x4,x5}       (1,0,0,1,1,0)^T      (1,0,0,1,1,0)^T       {x1,x4,x5}
{x1,x2,x3,x6}    (1,1,1,0,0,1)^T      (1,1,1,1,1,1)^T       {x1,x2,x3,x4,x5,x6}
Table 4. An IIS K2 about cars.
Car     Price (P)     Mileage (L)     Size (S)     Max-Speed (X)
x1      High          Medium          Full         Low
x2      Low           *               Full         High
x3      High          *               Compact      *
x4      High          High            Compact      Low
x5      Low           High            Full         Low
x6      Low           High            Full         High
* stands for a missing value.
Table 5. An inconsistent IDS K4 about cars.
Car     Price (P)     Mileage (L)     Size (S)     Max-Speed (X)     Evaluation (I)     Υ_AT
x1      High          High            Full         Low               Good               {Good, Excellent}
x2      Low           Medium          Compact      High              Poor               {Poor}
x3      High          Medium          Compact      Low               Poor               {Good, Poor}
x4      *             Medium          Compact      High              Poor               {Poor}
x5      High          Medium          Compact      Low               Good               {Good, Poor}
x6      High          High            Full         Low               Excellent          {Excellent}
* stands for a missing value.
